How do I convince programmers in my team to do TDD? [closed] - unit-testing

I am aware of this question: https://stackoverflow.com/questions/428691/how-to-encourage-implementation-of-tdd
In my team, we write a lot of unit tests. But, in general, the programmers tend to write unit tests after writing the code. So, we first finish the module functionality and then write tests. Our coverage is around 70% for most modules. I have tried convincing my technical manager and my team members to do pure TDD, wherein we first write tests and then the code, but in vain. I think writing tests first allows us to discover design better. Am I just being finicky, especially when our coverage is quite high? If the answer to this question is no, then how do I talk people into a test-first approach?
EDIT: I think writing tests after writing code is an easier thing to do. People in my team have grown accustomed to doing this and are opposing any change.

I don't know that there is a whole lot you can tell people to convince them of the value of TDD. You can cite what the experts have told us about it, and your own personal experiences, but if folks are not willing to give it a try, chances are low that sharing this information with them will help.
My experience with TDD was basically that it sounded like a really good idea, but it never really worked out the way it was supposed to. Then one day I tried it again on a new task and ended up with a solution to the problem that was simpler than what I would have thought possible, due entirely to the fact that I had used TDD. I think when developers have this sort of experience it changes the way they look at things, and makes them more willing to try it in other situations.
The challenge is being able to demonstrate this to the other developers. One way you may be able to do this is with the use of a TDD Kata like this one from Roy Osherove (he uses it in his TDD Master Course). It is designed specifically to demonstrate the value in working in small steps, implementing only the code that is needed to make each test pass. This may show folks how the process works, and make them more comfortable with giving it a try.
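For instance, the first few minutes of such a kata session might look like the following. This is only a sketch in JUnit 4; the StringCalculator name and rules here are illustrative, not Osherove's exact spec:

    // Step 1: write the smallest failing test.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class StringCalculatorTest {
        @Test
        public void emptyStringReturnsZero() {
            assertEquals(0, new StringCalculator().add(""));
        }

        @Test
        public void singleNumberReturnsItsValue() {
            assertEquals(7, new StringCalculator().add("7"));
        }
    }

    // Step 2: write only enough code to make the current test pass,
    // then go back to step 1. No speculative features.
    class StringCalculator {
        int add(String numbers) {
            if (numbers.isEmpty()) {
                return 0;
            }
            return Integer.parseInt(numbers);
        }
    }

The point of the exercise is the rhythm, not the calculator: each new test forces one small, observable step in the design.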
There was also a coding exercise I heard about where you give two groups/teams of developers a reasonably simple task, and ask one of the groups to use TDD, making sure they follow the "simplest thing that could possibly work" rule, while the other team does things however it wants. Then, once that is done, you have the teams switch tasks, but throw out the code written by each team, leaving only the tests. The teams are then supposed to recreate the code for the task. Typically you will find that the team that inherits the TDD code has a much easier time doing this.
Given all that, though, I think the best thing you can do personally is to start doing TDD yourself for as much of your work as possible. This has the potential to give you some very specific references for where and how TDD has proved to be beneficial within the context of the current project. In particular, if you do code reviews, your peers may notice that the code you are writing with TDD is more concise and easier to maintain than the code that has been written without TDD. Your QA team may also notice a difference in the quality of the code, which is one of the things that you hear a lot about from companies who move to TDD.

A couple suggestions. Your practicality may vary:
Win one or two people (your boss, an intern, etc.) over to your side first. Your first follower will make you a leader.
Start pair programming or mentoring. Even if it's just with an intern or two, working closely with someone can be a good way to influence their style. If you are willing, you could try becoming a manager.
Give a technical presentation on the subject. Make the focus on the why and the problem you are solving, instead of TDD. You want people to buy into the problem rather than your specific solution. Include a couple other alternatives so it doesn't seem like you are just trying to push what works for you.
Get some outside training from Object Mentor or the like. Works best if you can convince your boss and the team isn't a bunch of hardened soulless cynics.

To be honest, you should always just use a development/test cycle that works.
A lot of people like TDD, and big players like Google have embraced it, in part because of the high test coverage it yields.
However, it seems that you and your team tend to be doing pretty well without it, and any change in development style decreases productivity at least temporarily. So remember the old adage: don't change what works.
On the other hand, if you and your customers are finding that there are still a lot of bugs the tests don't catch, TDD is an ideal way to improve that, so you should tell management that TDD is a way to increase customer satisfaction and thus make money. (That's management-speak for you!)

Perhaps leading by example can help:
Start working like this yourself
Perhaps create a tutorial/script to set up the environment (the IDE) so that it does not add overhead to the TDD process:
Run the tests in a single keyboard shortcut
The GUI of the test system should be present in the development view (not just in the testing view, so you don't have to move between them)
I am guessing that after a while people will be curious and ask you if this TDD thing really works; you should have a prepared answer for that question :-)

Have you come across BDD at all? There's an associated change in vocabulary which I find really helps newcomers to TDD pick it up. Here's the vocab change:
http://lizkeogh.com/2009/11/06/translating-tdd-to-bdd/
I've found that using this language helps people focus on why it's useful to write the tests (or examples) first. I translated another example in the comments.
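To make the vocabulary shift concrete, here is a small sketch (JUnit; the calculator and its names are mine, not from the linked post). The behaviour checked is identical; only the language changes:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorExamples {
        private final Calculator calculator = new Calculator();

        // TDD-style name: talks about the test.
        @Test
        public void testAdd() {
            assertEquals(5, calculator.add(2, 3));
        }

        // BDD-style name: reads as a sentence about behaviour.
        @Test
        public void shouldReturnTheSumOfTwoNumbers() {
            assertEquals(5, calculator.add(2, 3));
        }

        static class Calculator {
            int add(int a, int b) { return a + b; }
        }
    }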
Even then, sometimes it's helpful to learn how tests are structured. If people have trouble learning how to write them first, writing them afterwards is a good learning step. You're right about the design benefits. It can take a while to grok.
In the past I've found that the best way to get TDD is to have a safe environment to practice in. Having my own toy app or running / attending workshops based on a toy app have both helped me a lot.

Related

How much testing is enough? [closed]

I recently spent about 70% of the time coding a feature writing integration tests. At one point, I was thinking “Damn, all this hard work testing it, I know I don’t have bugs here, why do I work so hard on this? Let’s just skimp on the tests and finish it already…”
Five minutes later a test fails. Detailed inspection shows it’s an important, unknown bug in a 3rd party library we’re using.
So … where do you draw the line on what to test and what to take on faith? Do you test everything, or only the code where you expect most of the bugs?
In my opinion, it's important to be pragmatic when it comes to testing. Prioritize your testing efforts on the things that are most likely to fail, and/or the things that it is most important that do not fail (i.e. take probability and consequence into consideration).
Think, instead of blindly following one metric such as code coverage.
Stop when you are comfortable with the test suite and your code. Go back and add more tests when (if?) things start failing.
When you're no longer afraid to make medium to major changes in your code, then chances are you've got enough tests.
Good question!
Firstly - it sounds like your extensive integration testing paid off :)
From my personal experience:
If it's a "green fields" new project, I like to enforce strict unit testing and have a thorough (as thorough as possible) integration test plan designed.
If it's an existing piece of software that has poor test coverage, then I prefer to design a set of integration tests that test specific/known functionality. I then introduce tests (unit/integration) as I progress further with the code base.
How much is enough? Tough question - I don't think that there can ever be enough!
"Too much of everything is just enough."
I don't follow strict TDD practices. I try to write enough unit tests to cover all code paths and exercise any edge cases I think are important. Basically I try to anticipate what might go wrong. I also try to match the amount of test code I write to how brittle or important I think the code under test is.
I am strict in one area: if a bug is found, I first write a test that exercises the bug and fails, make the code changes, and verify that the test passes.
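A sketch of that discipline in JUnit; the off-by-one bug and all names here are hypothetical:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical bug report: orders of exactly 100 don't get the discount.
    public class DiscountTest {
        @Test
        public void ordersOfExactlyOneHundredGetTheDiscount() {
            // Step 1: this test failed while the code still read "total > 100".
            assertEquals(90.0, Discount.apply(100.0), 0.001);
        }
    }

    class Discount {
        static double apply(double total) {
            // Step 2: the fix changed "> 100" to ">= 100"; step 3: test passes.
            return total >= 100 ? total * 0.9 : total;
        }
    }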
Gerald Weinberg's classic book "The Psychology of Computer Programming" has lots of good stories about testing. One I especially like is in Chapter 4, "Programming as a Social Activity", where "Bill" asks a co-worker to review his code and they find seventeen bugs in only thirteen statements. Code reviews provide additional eyes to help find bugs; the more eyes you use, the better chance you have of finding ever-so-subtle bugs. As Linus said, "Given enough eyeballs, all bugs are shallow." Your tests are basically robotic eyes that will look over your code as many times as you want, at any hour of day or night, and let you know if everything is still kosher.
How many tests are enough does depend on whether you are developing from scratch or maintaining an existing system.
When starting from scratch, you don't want to spend all your time writing tests and end up failing to deliver because the 10% of the features you were able to code are exhaustively tested. There will be some amount of prioritization to do. One example is private methods. Since private methods must be used by code that is visible in some form (public/package/protected), they can be considered covered by the tests for the more-visible methods. This is where you need to include some white-box tests if there are important or obscure behaviors or edge cases in the private code.
Tests should help you make sure you 1) understand the requirements, 2) adhere to good design practices by coding for testability, and 3) know when previously existing code stops working. If you can't describe a test for some feature, I would be willing to bet that you don't understand the feature well enough to code it cleanly. Using unit test code forces you to do things like pass in as arguments those important things like database connections or instance factories instead of giving in to the temptation of letting the class do way too much by itself and turning into a 'God' object. Letting your code be your canary means that you are free to write more code. When a previously passing test fails it means one of two things, either the code no longer does what was expected or that the requirements for the feature have changed and the test simply needs to be updated to fit the new requirements.
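As a sketch of the "pass it in as an argument" point, with hypothetical names throughout:

    import java.util.Arrays;
    import java.util.List;

    // Testable version: the data source is handed in, so a test can
    // substitute a fake instead of standing up a real database.
    interface OrderRepository {
        List<Double> totalsForCustomer(String customerId);
    }

    class ReportService {
        private final OrderRepository orders;

        ReportService(OrderRepository orders) {
            this.orders = orders; // injected, not constructed internally
        }

        double customerTotal(String customerId) {
            double sum = 0;
            for (double total : orders.totalsForCustomer(customerId)) {
                sum += total;
            }
            return sum;
        }
    }

    // A unit test can use a hand-rolled stub; no database required:
    class FixedOrders implements OrderRepository {
        public List<Double> totalsForCustomer(String customerId) {
            return Arrays.asList(10.0, 20.0);
        }
    }
    // new ReportService(new FixedOrders()).customerTotal("any") returns 30.0

Compare that with a constructor that opens its own database connection: the test would then need a live database just to check the arithmetic.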
When working with existing code, you should be able to show that all the known scenarios are covered, so that when the next change request or bug fix comes along, you will be free to dig into whatever module you see fit without the nagging worry "what if I break something", which leads to spending more time testing even small fixes than it took to actually change the code.
So, we can't give you a hard and fast number of tests, but you should shoot for a level of coverage which increases your confidence in your ability to keep making changes or adding features; otherwise you've probably reached the point of diminishing returns.
If you or your team has been tracking metrics, you could see how many bugs are found for every test as the software life-cycle progresses. If you've defined an acceptable threshold where the time spent testing does not justify the number of bugs found, then THAT is the point at which you should stop.
You will probably never find 100% of your bugs.
I spend a lot of time on unit tests, but very little on integration tests. Unit tests allow me to build out a feature in a structured way, and afterwards you have some nice documentation and regression tests that can be run on every build.
Integration tests are a different matter. They are difficult to maintain and by definition integrate a lot of different pieces of functionality, often with infrastructure that is difficult to work with.
As with everything in life it is limited by time and resources and relative to its importance. Ideally you would test everything that you reasonably think could break. Of course you can be wrong in your estimate, but overtesting to ensure that your assumptions are right depends on how significant a bug would be vs. the need to move on to the next feature/release/project.
Note: My answer primarily addresses integration testing. TDD is very different; it was covered on SO before, and there you stop testing when you have no more functionality to add. TDD is about design, not bug discovery.
I prefer to unit test as much as possible. One of the greatest side-effects (other than increasing the quality of your code and helping keep some bugs away) is that, in my opinion, high unit test expectations require one to change the way they write code for the better. At least, that's how it worked out for me.
My classes are more cohesive, easier to read, and much more flexible because they're designed to be functional and testable.
That said, I default to a unit test coverage requirement of 90% (line and branch), using JUnit and Cobertura (for Java). When I feel that these requirements cannot be met due to the nature of a specific class (or bugs in Cobertura), then I make exceptions.
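For what it's worth, a threshold like that can be enforced in the build itself. The following is a sketch of a cobertura-maven-plugin check configuration; I believe these are the documented element names for its check goal, but verify them against the plugin docs for your version:

    <!-- pom.xml fragment: fail the build if overall coverage drops below 90% -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <configuration>
        <check>
          <totalLineRate>90</totalLineRate>
          <totalBranchRate>90</totalBranchRate>
          <haltOnFailure>true</haltOnFailure>
        </check>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>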
Unit tests start with coverage, and really work for you when you've used them to test boundary conditions realistically. For advice on how to implement that goal, the other answers all have it right.
This article gives some very interesting insights on the effectiveness of user testing with different numbers of users. It suggests that you can find about two thirds of your errors with only three users testing the application, and as much as 85% of your errors with just five users.
Unit testing is harder to put a discrete value on. One suggestion to keep in mind is that unit testing can help to organize your thoughts on how to develop the code you're testing. Once you've written the requirements for a piece of code and have a way to check it reliably, you can write it more quickly and reliably.
I test Everything. I hate it, but it's an important part of my work.
I worked in QA for 1.5 years before becoming a developer.
You can never test everything (when I was trained, I was told that testing all the permutations of a single text box would take longer than the age of the known universe).
As a developer it's not your responsibility to know or state the priorities of what is important to test and what not to test. Testing and quality of the final product is a responsibility, but only the client can meaningfully state the priorities of features, unless they have explicitly given this responsibility to you. If there isn't a QA team and you don't know, ask the project manager to find out and prioritise.
Testing is a risk reduction exercise and the client/user will know what is important and what isn't. Using a test first driven development from Extreme Programming will be helpful, so you have a good test base and can regression test after a change.
It's important to note that, due to natural selection, code can become "immune" to tests. Code Complete says that when fixing a defect you should write a test case for it and look for similar defects; it's also a good idea to write test cases for those similar defects.

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
"There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible."
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horrible bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want, and it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Quality control of real software projects goes far beyond simply checking whether the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "fewer bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now", even when shown pretty graphs demonstrating the opposite.
Moreover, software quality control, and software engineering as a whole, is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF), which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects, as the software engineering schools would have you think. He just adds another parameter to control: "Do I want to control this project? Will it be needed?"
Laziness / considered boring.
Management feeling it's unnecessary - the ignorant "Just do it right" attitude.
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project".
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests and reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests, and the programmers could simply implement something new instead of looking after the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes a life of its own (and becomes prominent) as different business units start depending on it for their own business process.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams are a support function to the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of the value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And of those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using them in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right (i.e. found a way that works for me and my team...).
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning how to test what. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learn techniques and patterns that increase testability. A small sketch of the database point follows this list.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
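As one concrete example of the database point in the list above (a sketch using JUnit and Mockito; every name here is hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    public class GreetingServiceTest {
        interface UserDao {              // stands in for the database access layer
            String nameOf(int userId);
        }

        static class GreetingService {
            private final UserDao dao;
            GreetingService(UserDao dao) { this.dao = dao; }
            String greet(int userId) {
                return "Hello, " + dao.nameOf(userId) + "!";
            }
        }

        @Test
        public void greetsUserByNameWithoutARealDatabase() {
            UserDao dao = mock(UserDao.class);       // the mock replaces the database
            when(dao.nameOf(42)).thenReturn("Ada");

            assertEquals("Hello, Ada!", new GreetingService(dao).greet(42));
            verify(dao).nameOf(42);                  // the collaboration happened
        }
    }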
So until you find the right way: yeah, it's dull, unrewarding, hard to do, time consuming, etc.
EDIT:
In this blogpost I go in depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the last seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. And even if you set up mock data, can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered; that is, unit testing is bad at simulating the unexpected. If you haven't considered what could happen in a power outage, or when the network link sends bad data that is still CRC-correct, then writing tests for those cases is futile.
I am all in favour of code inspections, as they let programmers share experience and code style with other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit Testing takes extra work. If a programmer sees that his product "works" (e.g. with no unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was being changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that, everyone was happy and felt no need for unit tests. So we have a few tests that stay there as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is the lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other project parts and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It will take time and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, though two or three is really needed. So new colleagues do not manage to write proper tests. And there is no point in creating bad tests.
It's 'dull' to chase some random 'feature' of extreme importance for more than a day through a mysterious code jungle written by someone else x years ago, without any clue what's going wrong, why it's going wrong, and with absolutely no idea what could fix it, when the task was supposed to end in a few hours. And when it's done, no one is satisfied, because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing makes me more productive, but I've seen lots of badly formatted, unreadable, poorly designed code which passed all its tests (generally long-in-the-tooth code which had been patched many times). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be: instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check; why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, says that I have too many methods in my class. So I should cut the class into abstract class(es) (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since it might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting and wants you to use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think the code quality masters should really work on detecting when code is too complicated, and not care so much about the little things (you can refactor those away really quickly with the right tools).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you changed anything you had to worry about broken (poorly written) unit tests, and you never knew what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place; for example, I did a lot of unit testing in my webpage parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person that writes tests (I want to get better at it; that's the only reason I do it). I understand that for others writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on a web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change on a webpage. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

How do you persuade others to write unit tests? [closed]

I've been test-infected for a long time now, but it would seem the majority of developers I work with either have never tried it or dismiss it for one reason or another, with arguments typically being that it adds overhead to development or they don't need to bother.
What bothers me most about this is that when I come along to make changes to their code, I have a hard time getting it under test as I have to apply refactorings to make it testable and sometimes end up having to do a lot of work just so that I can test the code I'm about to write.
What I want to know is, what arguments would you use to persuade other developers to start writing unit tests? Most developers I've introduced to it take to it quite well, see the benefits and continue to use it. This always seems to be the good developers though, who are already interested in improving the quality of their code and hence can see how unit testing does this.
How do you persuade the rest of the motley crew? I'm not looking for a list of testing benefits, as I already know what these are, but rather for the techniques you have used or would use to get other people on board. Tips on how to persuade management to take an active role are appreciated as well.
There's more than one side to that question, I guess. I find that actually convincing developers to start using tests is not that hard, because the list of advantages of testing often speaks for itself. That said, it is quite a barrier to actually get going, and I find that the learning curve is often a bit steep, especially for novice coders. Throwing testing frameworks, the TDD test-first mentality, and mocking frameworks at someone who's not yet comfortable with either C#, .NET, or programming in general could be just too much to handle.
I work as a consultant and therefore I often have to address the problem of implementing TDD in an organization. Luckily enough, when companies hire me it is often because of my expertise in certain areas, and therefore I might have a little advantage when it comes to getting people's attention. Or maybe it's just that it's a bit easier for me as an outsider to come into a new team and say "Hi! I've tried TDD on other projects and I know that it works!" Or maybe it's my persuasiveness/stubbornness? :) Either way, I often don't find it very hard to convince devs to start writing tests. What I find hard, though, is to teach them how to write good unit tests. And, as you point out in your question, to stay on the righteous path.
But I have found one method that I think works pretty well when it comes to teaching unit testing. I've blogged about it here, but the essence is to sit down and do some pair programming. And doing the pair programming, I start out writing the unit test first. This way I show them a bit of how the testing framework works, how I structure the tests, and often some use of mocking. Unit tests should be simple, so all in all the test should be fairly easy to understand even for junior devs. The worst part to explain is often the mocking, but using easy-to-read mocking frameworks like Moq helps a lot. Then when the test is written (and nothing compiles or passes) I hand over the keyboard to my fellow coder so that (s)he can implement the functionality. I simply tell her/him: "Make it go green!" Then we move on to the next test; I write the test, the 'soon-to-be-test-infected' dev next to me writes the functionality.
Now, it's important to understand that at this point the dev(s) you are teaching are probably not yet convinced that this is the right way to code. The point where most devs seem to see the (green) light is when a test fails due to some code changes that they never thought would break any functionality. When the test that covers that functionality blows up, that's when you've got yourself a loyal TDD'er on your team. Or that's at least my experience, but as always; your mileage will vary :)
Quality speaks for itself. If you're more successful than everyone else, that's all you need to say.
Use a test-coverage tool. Make it very visible. That way everybody can easily see how much code in each area is passed, failed and untested.
Then you may be able to start a culture where "untested" is a sign of bad coding, "failed" is a sign of work in progress and "passed" is a sign of finished code.
This works best if you also do "test-first". Then "untested" becomes "you forgot step 1".
Of course you don't need 100% test coverage. But if one area has 1% coverage and another has 30%, you have a metric for which area is most likely to fail in production.
Lead by example: show evidence that there are fewer regressions in unit-tested code than elsewhere.
Get QA and management buy-in so that your process mandates unit testing.
Be ready to help others to get started with unit testing: provide assistance, supply a framework so that they can start easily, run an introductory presentation.
You just have to get used to the mantra "if it ain't tested, the work ain't done!"
Edit: To add some more meat to my facetious comment above, how can someone know if they're actually finished if they haven't tested their work?
Mind you, you will have a battle convincing others if time isn't allowed in the estimate for testing the developed code.
A one-to-one split between the effort for coding and the effort for testing seems to be a good number.
Give compliments when someone writes more tests and produces good results; show the best example to the others and ask them to produce the same or better results.
People (and processes) don't change without one or more pain points. So you need to find the significant pain points and demonstrate how unit testing might help deal with them.
If you can't find any significant pain points, then unit testing may not add a lot of value to your current process.
As Steve Lott implies, delivering better results than the other team members will also help. But without the pain points, my experience is that people won't change.
Two ways: convince the project manager that unit testing improves quality AND saves time overall, then have him make unit tests mandatory.
Or wait for a development crunch just before an important release date, where everyone has to work overtime and weekends to finish the last features and eliminate the last bugs, only to find they've just introduced more bugs. Then point out that with proper unit tests they wouldn't have to work like that.
Another situation where unit tests can be shown as indispensable is when a release was actually delivered and turns out to contain a serious bug due to last-minute changes.
If developers are seeing that the "successful" developers are writing unit tests, and they are still not doing it then I suggest unit tests should become part of the formal development life-cycle.
E.g. nothing can be checked in until a unit test is written and reviewed.
Perhaps reefnet_alex's answer helps you:
Is Unit Testing worth the effort?
I think it was Fowler who said:
"Imperfect tests, run frequently, are
much better than perfect tests that
are never written at all". I
interprate this as giving me
permission to write tests where I
think they'll be most useful even if
the rest of my code coverage is
woefully incomplete.
You mentioned that your manager is on board with unit tests. If that's the case, then why isn't he (she) enforcing it? It isn't your job to get everybody else to follow along or to teach them, and in fact other developers will often resent you if you try to push it on them. In order to get your fellow developers to write unit tests, the manager has to emphasize it strongly. Part of that emphasis might be education on unit test implementation, and you might end up being the educator; that's great, but the management of it is everything.
If you're in an environment where the group decides the style of implementation, then you have more of a say in how the group dynamic should be. If you are in that sort of environment and the group doesn't want to emphasize unit tests while you do, then maybe you're in the wrong group/company.
I have found that "evangelizing" or preaching rarely works. As others have said, do it your way for your own code, make it known that you do it, but don't try to force others to do it. If people ask about it be supportive and helpful. Offer to do a few lunch-time seminars or informal dog and pony shows. That will do a lot more than just complaining to your manager or the other developers that you have a hard time writing tests for code they wrote.
Slow and steady - it is not going to change overnight.
Once I realized that, at one place where I worked, the acceptance of peer reviews improved tremendously. My group just did it and stopped trying to get others to do it. Eventually people started asking about how we got some of the success we did. Then it was easier.
We have a test framework which includes automated running of the test suite whenever anyone commits a change. If someone commits code that fails the tests, the whole team gets emailed with the errors.
This leads to introduced bugs being fixed pretty quickly.

Automated Testing: ways to help and educate developers? [closed]

I'm a software test engineer embedded in a development team. A large part of my job involves checking over the state of the project's automated tests (mainly unit/integration tests).
I'm not a short-sighted zealot who wants to force testing down everyone's throats, but I do want to help everyone to get the best out of the time they spend writing tests. A lot of time is spent every week writing tests, so it is important to maximise the returns.
Right now, I do a few things to try and help. Firstly, I always make myself available to talk about testability concerns, e.g. trying to identify a testing strategy, or whether a particular design is testable, and so forth.
In addition to explaining things to people and generally trying to help them out, I also review the finished code and the tests that they write (I have to sign off on stories, meaning that I am somewhat adversarial, too).
My current process is to sit down alone, work through their code, and bookmark and comment all problem areas, places where things can be improved, and the reasons why. I then get the developer around to my PC and talk through all of the review points. I then send them a decent write-up so they have a record of it and easy reference.
I do not fix their code and tests for them, but I will add more test cases etc. if I spot gaps. The reason I have decided not to fix up the tests for them is that it's too easy for developers to say "thanks" but to tune out. My reasoning is that if they have to fix the problems I identified before I will sign off, it will lead to a better standard of testing on the project (i.e. more self-sufficient developer testing).
My question is: When it comes to aiding the team, could I be doing anything better? What approaches have you found that can be beneficial?
I'd particularly like to hear from people holding similar positions who have faced the same challenges (e.g. helping improve the quality of the testing, demonstrating the value testing can bring in relevant situations and also striking a good balance between being supportive and adversarial.)
EDIT:
Thanks for the answers; all of them contained useful suggestions. I marked the top one as the best answer as I guess it comes down to developer support, and pair programming is something I have not yet tried (short of a few impromptu 'here's how I'd do this' demonstrations after the tests had been written). I'll give that a go with anyone who struggles with testing something :) Cheers.
If you have certain people who tend to be weak at testing, then sit down with them in a sort of pair-programming session; as they work on their code, you can help them see how they might test it.
After a while these people should get better at unit testing, and your workload on this should decrease.
The other thing is that everyone should be looking at tests. If I touch a function and make any change, then I should be checking the tests to make certain they are still complete. If there is a problem I can discuss it with the developer.
You should also enlist the help of the team lead, as it is (or should be) part of their responsibility to ensure that everyone understands how to write tests well.
A few things I'd do:
Get them to run coverage and spot any missed areas of code, and highlight how, although they think they've got all the cases covered, they might not have. I've done this with a few people and they always seem quite surprised at the areas they've missed when they thought they'd written watertight tests (see the sketch at the end of this answer)
Start a "recipe" page on your local Wiki. Every time someone comes up with a testing scenario that they can't figure out, or need your help with, stick it on the Wiki and make it easy to find. Get other people to contribute as well
It sounds like you're already doing this anyway, but whenever anyone has a testing-related question, make yourself available, even if it's to the detriment of your normal workload. If you're passionate about it, it should inspire those who are interested to do the right thing too.
When I'm introducing someone to testing (or a new testing technique), I'll often spend a lot of my time randomly wandering over to their workstation just to see how they're getting on and nudge them in the right direction. This can be fitted in quite nicely when going for tea/smoke breaks or when you're doing a build. I've had quite good feedback about this, but YMMV.
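To illustrate the coverage point above, here is a minimal sketch (the function and test are invented for illustration) of a test that passes while leaving a whole branch unexercised; a line-by-line coverage report makes the gap obvious:

    # A "watertight-looking" test that coverage exposes as incomplete.
    # Run with, e.g.:
    #   coverage run -m unittest discover && coverage report -m
    import unittest

    def parse_age(text):
        value = int(text)
        if value < 0:
            raise ValueError("age cannot be negative")
        return value

    class TestParseAge(unittest.TestCase):
        def test_valid_age(self):
            self.assertEqual(parse_age("42"), 42)
        # No test exercises the negative branch; the coverage report
        # flags the `raise` line as never executed.

    if __name__ == "__main__":
        unittest.main()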
Depending on the size of the team, it may make sense, after an initial review of the code, to pull in someone else as another set of eyes to look through the changes you'd propose; it shows that this isn't just your opinion. This can help where there is tension over the changes you'd like to see, e.g. when a developer replies, "Oh, that'll take weeks and likely isn't worth it..." or something similar because what you'd like to change isn't that simple.
In a similar vein, how does most of the team view testing? Are there leaders or those highly respected that have a positive view on it and help foster a positive attitude towards it? Is there general documentation about the testing guidelines that may help those new to the team to get up to speed quickly? These are just a few other areas I'd examine since sometimes tests can be a great thing and sometimes they can be a pain. Much like the glass that is half-empty or half-full depending on how you want to see it.
Not that I've held the same position, but as someone who has been a developer for a while, this is just what I'd like to see to help make testing "a good thing", as Martha Stewart would say.
One way to gently ease the team into writing tests is to initiate the practice of writing a test whenever a bug is being fixed. So when a bug comes in, the first thing to do is write a test that will fail because of the bug, fix the bug, and then get the test to pass.
This approach can also be done when code gets modified internally (no public API changes) - write tests to cover the area being modified to ensure that it doesn't get broken by the code changes. Writing tests this way is a lot less work and clearly demonstrates the benefits once the developer catches their first regression bug.
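As a rough sketch of that flow (the bug and all names here are invented for illustration), the regression test is written first, fails against the buggy code, and passes once the fix is in:

    # Hypothetical regression test written *before* fixing a reported
    # bug: "the 10% discount is applied twice for orders over 100".
    import unittest

    def total_with_discount(subtotal):
        # Fixed implementation: apply the discount exactly once.
        if subtotal > 100:
            return round(subtotal * 0.9, 2)
        return subtotal

    class TestDiscountRegression(unittest.TestCase):
        def test_discount_applied_only_once(self):
            # Against the old buggy code (0.9 * 0.9), this returned
            # 97.20 and the test failed; after the fix it passes.
            self.assertEqual(total_with_discount(120), 108.0)

    if __name__ == "__main__":
        unittest.main()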

Is it feasible to introduce Test Driven Development (TDD) in a mature project? [closed]

Say we have realized the value of TDD too late. The project is already mature, and a good number of customers have started using it.
Say the automated testing in use is mostly functional/system testing, with a good deal of automated GUI testing.
Say we have new feature requests, and new bug reports (!), so a good deal of development still goes on.
Note that there are already plenty of business objects with little or no unit testing.
There is too much collaboration/too many relationships between them, which again are tested only through higher-level functional/system testing. There is no integration testing per se.
Big databases are in place with plenty of tables, views, etc. Just instantiating a single business object already takes a good deal of database round trips.
How can we introduce TDD at this stage?
Mocking seems to be the way to go, but the amount of mocking we would need here seems like too much. It sounds like an elaborate infrastructure would have to be developed to get a mocking system working for the existing stuff (business objects, databases, etc.).
Does that mean TDD is a suitable methodology only when starting from scratch? I am interested to hear about feasible strategies for introducing TDD into an already mature product.
Creating a complex mocking infrastructure will probably just hide the problems in your code. I would recommend that you start with integration tests, with a test database, around the areas of the code base that you plan to change. Once you have enough tests to ensure that you won't break anything if you make a change, you can start to refactor the code to make it more testable.
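As a minimal sketch of what such an integration test might look like, using an in-memory SQLite database as the test database (the schema, query, and names here are invented for illustration):

    # Integration test around existing data-access code, backed by an
    # in-memory SQLite database standing in for the real test database.
    import sqlite3
    import unittest

    def find_customer_name(conn, customer_id):
        # Stand-in for the legacy query under test.
        row = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else None

    class CustomerQueryIntegrationTest(unittest.TestCase):
        def setUp(self):
            # Build a fresh test database for every test.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE customers (id INTEGER, name TEXT)")
            self.conn.execute("INSERT INTO customers VALUES (1, 'Ada')")

        def tearDown(self):
            self.conn.close()

        def test_finds_existing_customer(self):
            self.assertEqual(find_customer_name(self.conn, 1), "Ada")

        def test_returns_none_for_missing_customer(self):
            self.assertIsNone(find_customer_name(self.conn, 99))

    if __name__ == "__main__":
        unittest.main()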
See also Michael Feathers' excellent book Working Effectively with Legacy Code; it's a must-read for anyone thinking of introducing TDD into a legacy code base.
I think it's completely feasible to introduce TDD into an existing application; in fact, I have recently done it myself.
It is easiest to code new functionality in a TDD way and restructure the existing code to accommodate this. That way you start off with a small section of your code tested, but the effects start to spread through the whole code base.
If you've got a bug, then write a unit test to reproduce it, refactoring the code as necessary (unless the effort is really not worth it).
Personally, I don't think there's any need to go crazy and try and retrofit tests into the existing system as that can be very tedious without a great amount of benefit.
In summary, start small and your project will become more and more test infected.
Yes you can. From your description the project is in good shape - a solid amount of automated functional testing is the way to go! In some respects it's even more useful than unit testing. Remember that TDD != unit testing; it's all about short iterations and solid acceptance criteria.
Please remember that having an existing and accepted project actually makes testing easier - a working application is the best requirements specification. So you're in a better position than someone who has just a scrap of paper to work with.
Just start working on your new requirements/bug fixes with TDD. Remember that there will be an overhead associated with switching methodology (make sure your clients are aware of this!), and expect a good deal of reluctance from the team members who are used to the 'good old ways'.
Don't touch the old things unless you need to. If an enhancement request affects existing stuff, factor in extra time for the extra set-up work.
Personally, I don't see much value in introducing complex mocking infrastructure - surely there is a way to achieve the same results in a lightweight way, but it obviously depends on your circumstances.
One tool that can help you test legacy code (assuming you can't/won't have the time to refactor it) is Typemock Isolator: Typemock.com
It allows injecting dependencies into existing code without needing to extract interfaces and such, because it does not use standard reflection techniques (dynamic proxy etc.) but uses the profiler APIs instead.
It's been used to test apps that rely on sharepoint, HTTPContext and other problematic areas.
I recommend you take a look.
(I work as a dev in that company, but it is the only tool that does not force you to refactor existing legacy code, saving you time and money)
I would also highly recommend "Working effectively with legacy code" for more techniques.
Roy
Yes you can. Don't do it all at once, but introduce just what you need to test a module whenever you touch it.
You can also start with more high level acceptance tests and work your way down from there (take a look at Fitnesse for this).
I would start with some basic integration tests. This will get buy-in from the rest of the staff. Then start to separate the parts of your code which have dependencies. Work towards using Dependency Injection as it will make your code much more testable. Treat bugs as an opportunity to write testable code.
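As a small sketch of how dependency injection opens code up for testing (all names here are illustrative), the external gateway is passed in rather than hard-wired, so a test can substitute a fake:

    # Dependency injection for testability: OrderService receives its
    # payment gateway, so tests never touch the real external service.
    import unittest

    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway  # injected dependency

        def checkout(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            return self.gateway.charge(amount)

    class FakeGateway:
        # Test double that records charges instead of making them.
        def __init__(self):
            self.charged = []

        def charge(self, amount):
            self.charged.append(amount)
            return "ok"

    class TestOrderService(unittest.TestCase):
        def test_checkout_charges_gateway(self):
            gateway = FakeGateway()
            service = OrderService(gateway)
            self.assertEqual(service.checkout(50), "ok")
            self.assertEqual(gateway.charged, [50])

    if __name__ == "__main__":
        unittest.main()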