Automated Testing: ways to help and educate developers? [closed] - unit-testing

I'm a software test engineer embedded in a development team. A large part of my job involves checking over the state of the project's automated tests (mainly unit/integration tests).
I'm not a short-sighted zealot who wants to force testing down everyone's throats, but I do want to help everyone to get the best out of the time they spend writing tests. A lot of time is spent every week writing tests, so it is important to maximise the returns.
Right now, I do a few things to try and help. Firstly, I always make myself available to talk about testability concerns, e.g. helping to identify a testing strategy, assessing whether a particular design is testable, and so forth.
In addition to explaining things to people and generally trying to help them out, I also review the finished code and the tests that they write (I have to sign off on stories, meaning that I am somewhat adversarial, too).
My current process is to sit down alone, work through their code, and bookmark & comment all problem areas, places where things can be improved, and the reasons why. I then get the developer around to my PC and talk through all of the review points. Afterwards I send them a decent write-up so they have a record of it for easy reference.
I do not fix their code and tests for them, but I will add more test cases etc. if I spot gaps. The reason I have decided not to fix up the tests for them is that it's too easy for developers to say "thanks" but to tune out. My reasoning is that if they have to fix the problems I identified before I will sign off, it will lead to a better standard of testing on the project (i.e. more self-sufficient developer testing).
My question is: When it comes to aiding the team, could I be doing anything better? What approaches have you found that can be beneficial?
I'd particularly like to hear from people holding similar positions who have faced the same challenges (e.g. helping improve the quality of the testing, demonstrating the value testing can bring in relevant situations and also striking a good balance between being supportive and adversarial.)
*edit:
Thanks for the answers; all of them contained useful suggestions. I marked the top one as the best answer as I guess it comes down to developer support, and pair programming is something I have not yet tried (short of a few impromptu 'here's how I'd do this' demonstrations after the tests had been written). I'll give that a go with anyone who struggles with testing something :) Cheers.

If you have certain people that tend to be weak at testing, then sit down with them in a sort of pair-programming session and, as they work on their code, help them see how they might test it.
After a while these people should get better at unit testing, and your work load on this should decrease.
The other thing is that everyone should be looking at tests. If I touch a function and make any change, then I should be checking the tests to make certain they are complete. If there is a problem I can discuss it with the developer.
You should also enlist the help of the team lead, since ensuring that everyone understands how to write tests well is (or should be) part of their responsibility.

A few things I'd do:
Get them to run coverage and spot any missed areas of code, and highlight how, although they think they've got all the cases covered, they might not have (see the coverage sketch after this list). I've done this with a few people and they always seem quite surprised at areas they've missed when they thought they'd written watertight tests.
Start a "recipe" page on your local Wiki. Every time someone comes up with a testing scenario that they can't figure out, or need your help with, stick it on the Wiki and make it easy to find. Get other people to contribute as well
It sounds like you're already doing this anyway, but ensure when anyone has a testing related question, make yourself available even if it's to the detriment of your normal workload. If you're passionate about it, it should inspire those who are interested to do the right thing too.
When I'm introducing someone to testing (or a new testing technique), I'll often spend alot of my time randomly wandering over to their workstation just to see how they're getting on and nudge them in the right direction. This can be fitted in quite nicely when going for tea/smoke breaks or when you're doing a build. I've had quite good feedback about this but YMMV.
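To make the coverage point concrete, here is a minimal sketch (JUnit 4, with a hypothetical class and method invented for the example) of the kind of gap a coverage report tends to expose: the author feels the test is watertight, but one branch is never executed.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The developer believes this test covers the class, but a coverage report
    // will flag the bulk-discount branch below as never executed.
    public class DiscountCalculatorTest {
        @Test
        public void noDiscountForSmallOrders() {
            assertEquals(0.0, new DiscountCalculator().discountFor(3), 0.0001);
        }
        // Missing: a test for discountFor(10) and above.
    }

    // Hypothetical production code with two branches.
    class DiscountCalculator {
        double discountFor(int itemCount) {
            if (itemCount >= 10) {
                return 0.15;   // bulk-discount branch, untouched by the test above
            }
            return 0.0;        // small-order branch
        }
    }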

Depending on the size of the team, I wonder if it may make sense, after an initial review of the code, to pull in someone else as another set of eyes to look through the changes you'd propose, as a way to show that this isn't just your opinion. This could also help surface where there may be some tension over the changes you'd like to see, e.g. where a developer may reply, "Oh, that'll take weeks and likely isn't worth it..." or something similar, if what you'd like to change isn't that simple.
In a similar vein, how does most of the team view testing? Are there leaders or those highly respected that have a positive view on it and help foster a positive attitude towards it? Is there general documentation about the testing guidelines that may help those new to the team to get up to speed quickly? These are just a few other areas I'd examine since sometimes tests can be a great thing and sometimes they can be a pain. Much like the glass that is half-empty or half-full depending on how you want to see it.
Not that I've had the same position, but as someone that has been a developer for a while, this is just what I'd like to see to help make testing be a good thing, as Martha Stewart would say.

One way to gently ease the team into getting tests started is to initiate the practice of writing tests when bugs are being fixed. So when a bug comes in, the first thing to do is write a test that will fail because of the bug, fix the bug, and then get the test to pass.
This approach can also be done when code gets modified internally (no public API changes) - write tests to cover the area being modified to ensure that it doesn't get broken by the code changes. Writing tests this way is a lot less work and clearly demonstrates the benefits once the developer catches their first regression bug.
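As a rough illustration of that bug-first workflow, a JUnit 4 sketch might look like the following; the bug, the parser, and all names are made up for the example.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: a test that fails because of the (hypothetical) reported bug:
    // parsing an id surrounded by whitespace throws instead of returning 42.
    public class OrderIdParserTest {
        @Test
        public void parsesIdSurroundedByWhitespace() {
            assertEquals(42, OrderIdParser.parse("  42 "));
        }
    }

    // Step 2: fix the production code until the test passes, e.g. by trimming
    // the input, and keep the test around as a regression guard.
    class OrderIdParser {
        static int parse(String raw) {
            return Integer.parseInt(raw.trim());
        }
    }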

Related

Our code sucks and I'm powerless to fix it. Help! [closed]

Our code sucks. Actually, let me clarify that. Our old code sucks. It's difficult to debug and is full of abstractions that few people understand or even remember. Just yesterday I spent an hour debugging in an area that I've worked in for over a year and found myself thinking, "Wow, this is really painful." It's not anyone's fault - I'm sure it all made perfect sense initially. The worst part is that usually It Just Works...provided you don't ask it to do anything outside of its comfort zone.
Our new code is pretty good. I think we're doing a lot of good things there. It's clear, consistent, and (hopefully) maintainable. We've got a Hudson server running for continuous integration and we have the beginnings of a unit test suite in place. The problem is our management is laser-focused on writing New Code. There's no time to give Old Code (or even old New Code) the TLC it so desperately needs. At any given moment our scrum backlog (for six developers) has about 140 items and around a dozen defects. And those numbers aren't changing much. We're adding things as fast as we can burn them down.
So what can I do to avoid the headaches of marathon debugging sessions mired in the depths of Old Code? Every sprint is filled to the brim with new development and showstopper defects. Specifically...
What can I do to help maintenance and refactoring tasks get high enough priority to be worked?
Are there any C++-specific strategies you employ to help prevent New Code from rotting so quickly?
Your management may be focused on getting working features into the product, and keeping them working. In this case, you will need to make a business case for refactoring the old stuff, in that by X investment of time and effort you can reduce necessary maintenance time by Y over period Z. Or your management may be fundamentally clueless (this happens, but less often than most developers seem to think), in which case you'll never get permission.
You need to see the business point of view. It doesn't matter to the end user whether the code is ugly or elegant, only what the software does. The cost of bad code is potential unreliability and additional difficulty in changing it; the emotional distress it causes to the programmer is rarely considered.
If you can't get permission to go in and refactor, you can always try it on your own, a little bit at a time. Whenever you fix a bug, do a little rewriting to make things clearer. This may turn out to be faster than the minimum possible fix, particularly in verifying that the code now works. Even if it isn't, it's usually possible to take a little more time on a bug fix without getting into trouble. Just don't get carried away.
If you can leave the code just a little better each time you go in, you'll feel a lot better about it.
Stand Up Meetings
I might go to my mechanic, and we have a little stand-up meeting in the morning:
I tell him I want my wheels aligned, my tires rotated, and my oil changed. I mention that, "Oh, by the way, my brakes felt a little soft on the way in. Could [he] take a look at them? How soon can I get my car back, because I need to get back to work?"
He pops his head under my car, pops back up and says my brakes are leaking oil and starting to fail. He will need a part that will arrive at 10:30am. His man won't finish before lunch, but I should get my car back by 1:30pm or so. He's booked solid, so he won't be able to do any of the other stuff today, and I will have to book another appointment.
I ask if he can do the other stuff and I come back for the brakes. He tells me he really can't let me drive out of there without fixing the brakes because they might cause an accident, but if I want to go to another mechanic, he can call for a tow.
Since the car will be done so shortly after lunch, I ask if his man can take a late lunch so I can get my car back an hour earlier.
He tells me his men come in at 8am and often work into the evening. They earn every break they get, and his man deserves to take his lunch with everyone else.
None of that is what I wanted to hear. I wanted to hear that I would drive out of there in a half hour with my wheels, tires and oil done.
My mechanic was just straight up and honest with me. Are you straight up and honest with your management? Or do you avoid telling them things they don't want to hear?
Unit Testing
I wouldn't touch a line of code I didn't understand, and I wouldn't check in a new line of code I didn't test thoroughly. (At least, not intentionally.)
Your question seems to imply that somehow a large corpus of poorly documented code made it past review without any unit tests. Maybe you participated in that, and maybe you didn't. Everyone involved needs to accept responsibility for that--including management. Regardless, what's done is done. You cannot go back and change it.
However, right now, in the present time, it is everybody's responsibility to stop the behavior that led to the problem in the first place. You say you spent a year working in code that you find difficult to understand and that has no unit tests. During that year, as you worked hard to improve your understanding, how many unit tests did you write to document and to verify that understanding?
As you struggled through the code slowly gaining understanding, how many comments did you add so you wouldn't have to struggle next time?
Scrum Backlog
Personally, I think the term "Scrum backlog" is a misnomer. A list of things to do is just a list--a shopping list if you will. I had a list when I went to the mechanic. My stand up meeting with the mechanic was really more of a sprint planning meeting.
A sprint planning meeting is a negotiation. If your management is time boxing without that negotiation, they aren't managing anything. They are simply trying to cram 10 lbs of shit into a 5 lb sack, and it's your responsibility to tell them so.
When you show up to a sprint planning meeting, you are expected to commit to a body of work, and it's your responsibility to prepare for that. Preparation means having some idea of what you will have to do to complete each item on the list--including the time it takes to understand obscure code and the time it takes to write unit tests.
If someone invites you to a planning meeting where you won't have time to prepare, decline the meeting and suggest when to reschedule so you will have time.
If you have an existing body of code with no unit tests and a feature might conceivably affect the operation of that code, you need to write unit tests for as much of the old code as might be affected. When you commit to writing the feature, you are committing to doing that work. If that leaves you too little time to commit to some other feature, just say so. Don't commit to the other feature.
When you commit to fix a defect, you commit to testing your work. Obviously, that means writing a unit test for the defect. But if it involves old code with no unit tests, it also means writing unit tests for things that aren't broken yet, but might break due to your change. How else will you test the fix?
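One common way to write those "not broken yet" tests is a characterization test that simply pins the current behaviour of the legacy code before you touch it. A minimal JUnit 4 sketch, with entirely hypothetical names and behaviour, might look like this:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Characterization tests: record what the legacy code does today, so that a
    // behavioural change caused by the fix shows up as a failing test rather
    // than as a field defect.
    public class LegacyPricingCharacterizationTest {
        @Test
        public void currentRoundingBehaviourIsPreserved() {
            // Whatever the legacy code returns right now becomes the expected value.
            assertEquals("19.99", LegacyPricing.format(19.994));
        }

        @Test
        public void currentHandlingOfZeroIsPreserved() {
            assertEquals("0.00", LegacyPricing.format(0.0));
        }
    }

    // Minimal stand-in for the legacy code under test.
    class LegacyPricing {
        static String format(double price) {
            return String.format("%.2f", price);
        }
    }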
If your defect list remains a constant size, your team regresses as much as it fixes. Politely explain to whoever needs to understand that unit tests prevent the regressions that currently keep your defect list from shrinking.
If you fail to write those unit tests because you commit to too many features, whose responsibility is that?
Refactoring
When you refactor code, you have to test all of it, and that means writing unit tests for all of it. If you have a large body of code with no unit tests, you will have to write all of those unit tests before you refactor.
I suggest you hold off on refactoring until those unit tests are in place. In the meantime, if you insist on including unit tests in your estimates for the work you commit to, eventually all those unit tests will be there. And then you can refactor.
The one exception to that is refactoring for testability. You may find that some of the code was not designed for test and that you have to refactor for things like dependency injection before you can create your unit tests. When you commit to writing the feature that requires the unit test, you commit to making the code testable. Include that in your estimate when you commit to the feature.
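As an illustration of refactoring for testability, here is a small, hypothetical before/after sketch using constructor injection; the interface and class names are invented for the example, not taken from the answer above.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Before (hypothetical): the service created its own database connection,
    // e.g.  private final Database db = new ProductionDatabase();
    // so a unit test could not run without real infrastructure.

    // After: the collaborator is injected through the constructor, so the test
    // below can pass in an in-memory fake (here, just a lambda).
    public class InvoiceServiceTest {
        @Test
        public void flagsCustomersOverTheLimit() {
            InvoiceService service = new InvoiceService(id -> 150.0); // fake store
            assertTrue(service.isOverCreditLimit("any-customer", 100.0));
        }
    }

    interface InvoiceStore {
        double totalFor(String customerId);
    }

    class InvoiceService {
        private final InvoiceStore store;

        InvoiceService(InvoiceStore store) {    // constructor injection
            this.store = store;
        }

        boolean isOverCreditLimit(String customerId, double limit) {
            return store.totalFor(customerId) > limit;
        }
    }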
Commitment + Responsibility = Power
You say you are powerless. When you accept responsibility and commit to doing what needs doing, I think you will find you have all the power you need.
P.S. If anyone complains about anybody "wasting time" writing multiple unit tests when fixing a single defect, show them this video on the 80:20 rule and pound "defect clusters" into their brains.
It is hard to tell much from the information you give, but I do have some questions. One logical reason to be writing new code is to replace the old code. If that is what you are doing, abandon the old code.
Is it also the old code that has showstopper defects? If so, where are they coming from? Old code does not usually have "showstopper" defects; it just grinds closer and closer to a halt. It is old code after all - it should have the same old defects and the same old limitations, not stuff that has to be looked at right away. Showstopper defects are new code defects. It sounds like there is active development going on in the old code.
If you are writing all this new code on top of old code that sucks, with no plans to fix it once and for all, sorry, there is only so much you can do when you are too busy burying yourself to dig yourself out.
If the latter is the case, you should recognize where you are headed and try to detach a little. It is all going to collapse eventually; if you plan on being around, save your strength for a worthwhile battle.
In the meantime try to pick up some design patterns. There are several that can at least help shield your new code from the old stuff, but still, ultimately it is just hard to write good code against bad code.
And your sprints sound maybe confused. Is there not an overall direction? That should determine how much backlog you have. Although things can change month to month, is there not a clear sense of moving towards some final goal?
And new code rotting? The way you prevent that is you have a meaningful design, a meaningful direction, and a quality team that is committed to both the quality of their work and the vision of the design. If you have that, discipline is what maintains quality. If you don't have that then, sorry, you basically were writing code with no purpose already. It was basically rotten on the vine.
Not being critical, just trying to be honest. Take a deep breath. Slow down. You seem like you need it. Look at what you have written here. It tells nothing. You talk of refactor, scrums, showstoppers, defects, old code, new code. What does any of that mean? It is all jumbled up.
What about "new initiatives versus legacy systems"? "Need to refactor early sprint cycle code in terms of latest understanding etc." Are showstoppers in fact "Early components of the current enterprise initiatives have been released but are experiencing problems and no time is budgeted because of new development".
These would be meaningful concepts. You've given us nothing. I understand it is intense. My sprints are crazy too; we add a lot of backlog items because we could not get many requirements up front (a lot of my new requirements result from having to also contend with external regulatory bodies, since the normal business process is not always available).
But at the same time I am ground down by the sheer magnitude of what has to be done and the time to do it. Everything that is added to my backlog needs to be there. It is crazy, but at the same time I have a very clear idea of where I have been, where I need to go, and why the road is getting harder.
Step back, clear your thoughts, and figure out the same - where you have been and where you are going. Because if you do know that, it sure is not obvious from what you have written. If you cannot communicate anything your peers can understand, how far are you going to get with a business manager?
Old code always sucks. There are probably some rare exceptions written by people with names like Kernighan or Thompson but, for the typical "code written in an office" stuff, over time it's gonna stink. Developers get more experienced. Newer practices, such as continuous integration, change the game. Stuff gets forgotten. New maintainers fail to grasp designs and wish for re-writes. So best accept this as normal.
Some random things that might help...
Talk about it with your team. Share your experiences and your concerns, while avoiding "man your old code sucks" (for obvious reasons) and see what the consensus is. You're probably not alone.
Forget about your managers. Don't expose them to this level of detail - they don't need to think about new vs. old code and probably won't understand if they do. This is a problem for your team to tackle and, if necessary, to make your PO aware of.
Be open to the possibility that you may be able to throw stuff out. Some of that old code probably relates to features that are no longer being used or failed to be adopted by users in the first place. To make this work for you, you really need to go a level higher and think in terms of where the code really delivers user or business value vs. where it's just a ball of mud that no one is brave enough to take a decision on. Who dares, wins.
Relax your view of architectural consistency. There's always a way to tap into a working system with new code somewhere, and that may allow you to slowly migrate to a newer, smarter approach, while preserving the old long enough not to break existing things.
Overall, winning in this kind of situation is less about coding skills and much more about smart choices and handling the human aspects.
Hope that helps.
I recommend keeping track of how many bugs and code changes involve your "old code" and present this to either your manager or to your fellow developers at your next team meeting. With this in hand it should be simple enough to convince them that more needs to be done to refactor your "old code" and bring it up to par with your "new code".
It would also be prudent to document the parts of your "old code" that are most difficult to understand. These would also be the parts of your "old code" that you should be refactoring first once you get the approval.
Something to try: group your classes into - say - worst 10%, best 10%, and the rest. Deliver the lists to your management, saying, "I predict the majority of bugs over the next quarter will be found in the first set." Base the grouping on length, cyclomatic complexity, test coverage - whatever tools are handy and comfortable to you. Then sit back and watch - and be right. Now you've got some credibility, some leverage when you say, "I'd like to invest some resources in making our bad code better, to reduce bugs and maintenance costs - and I know where to invest that energy, see?"
You could create diagrams and sketches of how the new code works and how the classes and functions are related to one another. You could use FreeMind or maybe Dia. And I definitely agree with Documenting and commenting your code.
I once had a problem with this too. I wrote a font class for J2ME for my own language. It was awful, for reasons that you might also see in your code:
No Comments or documentation
Less object oriented
bad variable / function names
...
But after a few months I was forced to write the whole thing again. Now I've learned to use meaningful variable names (which are sometimes VERY long), to write comments more than I write code, and to use diagrams for the project's classes and their relationships.
I don't know if this is a real answer, but it definitely worked for me. And for old code you might actually have to reread the whole thing and add comments as you remember the functionality.
Hope it helped.
Talk to your Product Owner! Explain that time invested in refactoring the old code will bring the benefit of higher team velocity on new features once this obstacle is removed.
Other than the approaches mentioned above which are good, you can also try these:
For keeping future code clean
Try pair programming, at least for parts that make sense. It's an effective way of making reviewed, refactored code a practice.
Try to get refactoring into the definition of "done". Then it will be part of the estimation process and allotted time accordingly. So the definition of done might include: coded, unit tested, functionally tested, performance tested, code reviewed, refactored, and integrated (or something like this).
For Cleaning up the old code:
Unit tests are great for helping you refactor and figure out how things work.
I agree with the comments that a business case needs to be made for large-scale refactoring. But, small-scale refactoring could be easily included in the estimate and will provide immediate return. i.e.: I spend 2 hours rewriting a piece but I would have spent that time looking for bugs anyway.
You may also want to consider getting the product owner and scrummaster to capture a separate velocity for the old code vs the new code, and use that accordingly.
If there's a desired new feature and you can delineate a non-overwhelming hunk of code that is in the way, then you might be able to get management's blessing to replace the old code with new code that has the desired new feature. When I did this, I had to write a somewhat ugly shim layer to meet the old interfaces of the part of the software I wasn't going to touch, plus a test harness that could exercise both the existing code and the new code, to make sure the new code, as seen through the shim layer, could fool the rest of the application into thinking nothing had changed. By reworking the portion we reworked, we were able to show huge performance benefits, compatibility with desired new hardware, and a reduction in each of our field sites' needs for expertise in administering space for the application - and the new code was much more maintainable. That last point mattered not a whit to the users, but the other advantages from the rework were attractive enough to "sell" the users on the merits of a somewhat painful database conversion.
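The answer doesn't show its shim, but the general shape of such a layer might look like the following hypothetical Java sketch: the old interface stays put, and a thin adapter forwards calls to the reworked implementation. All names are invented for illustration.

    // The interface the legacy application already depends on (unchanged).
    interface LegacyRecordStore {
        String fetchRecord(int id);
    }

    // The reworked implementation with its own, cleaner API.
    class NewRecordRepository {
        RecordData findById(int id) {
            // ... new storage engine, new data structures ...
            return new RecordData(id, "example payload");
        }
    }

    class RecordData {
        final int id;
        final String payload;
        RecordData(int id, String payload) { this.id = id; this.payload = payload; }
    }

    // The shim: presents the old interface, delegates to the new code, and
    // converts between the old and new data shapes, so the rest of the
    // application thinks nothing has changed.
    class RecordStoreShim implements LegacyRecordStore {
        private final NewRecordRepository repository = new NewRecordRepository();

        @Override
        public String fetchRecord(int id) {
            RecordData r = repository.findById(id);
            return r.id + ":" + r.payload;   // old callers still get the flat string format
        }
    }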
Another more modest success story: we had a decent trouble tracking system that had literally years of history. There was a subsystem of our application that was famed for the speed with which it would burn out maintenance programmers. Clearly (well, clearly in my mind) it was in need of a major re-write, but management wasn't enthused about that. We were able to dig through the history in the trouble tracking data to show the staffing level that had gone into maintaining this module, and for all that effort, the trouble tickets per month against that module continued to arrive at a constant rate. When faced with actual data like that, even the reluctant managers who had long been tight-fisted about staffing re-work of that subsystem could see the merit of assigning staff to rework that module.
The approach as before was to leave the input and output of that module alone. The good news was that throwing virtual memory at the new code with its fancy new data structures did give a noticeable performance improvement to the module. The bad news is that we were nearly done with the re-implementation before we really understood what was wrong in the original implementation such that it did work most of the time, but managed to fail on some of the transactions on some days. The first cut faithfully reproduced those bugs, but the bugs were easier to understand in the reworked code so we now had a shot at really fixing the real problem. In retrospect, maybe we'd have been smarter to have captured data that produced the problems and have taken better care to make sure the reworked version didn't reproduce that problem. But, the truth is, nobody understood the problem until we were quite far along on the re-write. So, the re-write gave improved performance to the users and improved understanding to the current programmers, such that the real problem could really be resolved at last.
A fail example: There was yet another incredibly ugly module that persistently was a sore spot. Alas, I wasn't clever enough to be able to understand the defacto interfaces to this particular wretched hive of scum and villainy, at least not in the time frame of the nominal release schedule. I'd like to believe that given more time we could have figured out a suitable plan for re-working that piece of the system too, and maybe once we understood it, we could even identify user-desired improvements that we could fit into the re-write. But I can't promise that you'll find a prize in every box. If the box is entirely obscure to you, slicing away a chunk of it and replacing that piece with clean code is hard to do. The guy who had charge of that module is probably the one who was best positioned to figure out a plan of attack, but he saw the frequent crashes and calls from the field for assistance as "job security". I don't think management ever really recognized that he needed to be eased aside for someone with a hunger for change, but that's what probably was needed.
Drew

How do I convince programmers in my team to do TDD? [closed]

I am aware of this question: https://stackoverflow.com/questions/428691/how-to-encourage-implementation-of-tdd
In my team, we write a lot of unit tests. But, in general, the programmers tend to write unit tests after writing the code. So, we first finish the module functionality and then write tests. Our coverage is around 70% for most modules. I have tried convincing my technical manager and my team members to do pure TDD, wherein we first write tests and then the code, but in vain. I think writing tests first allows us to discover the design better. Am I just being finicky, especially when our coverage is quite high? If the answer to this question is no, then how do I talk people into a test-first approach?
EDIT: I think writing tests after writing code is an easier thing to do. People in my team have grown accustomed to doing this and are opposing any change.
I don't know that there is a whole lot you can tell people to convince them of the value of TDD. You can cite what the experts have told us about it, and your own personal experiences, but if folks are not willing to give it a try, chances are low that you sharing this information with them will help.
My experience with TDD was basically that it sounded like a really good idea, but it never really worked out the way it was supposed to. Then one day I tried it again on a new task and ended up with a solution to the problem that was simpler than what I would have thought possible, due entirely to the fact that I had used TDD. I think when developers have this sort of experience it changes the way they look at things, and makes them more willing to try it in other situations.
The challenge is being able to demonstrate this to the other developers. One way you may be able to do this is with the use of a TDD Kata like this one from Roy Osherove (he uses it in his TDD Master Course). It is designed specifically to demonstrate the value in working in small steps, implementing only the code that is needed to make each test pass. This may show folks how the process works, and make them more comfortable with giving it a try.
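For readers who haven't seen a kata before, a single red/green step might look like the sketch below (JUnit 4, written in the style of a string-calculator exercise; the names and steps are illustrative, not taken from the course material).

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Red: the very first test of the kata.
    public class StringCalculatorTest {
        @Test
        public void emptyStringAddsUpToZero() {
            assertEquals(0, StringCalculator.add(""));
        }
    }

    // Green: the simplest thing that could possibly work. Later tests (single
    // numbers, comma-separated numbers, ...) force the implementation to grow.
    class StringCalculator {
        static int add(String numbers) {
            return 0;
        }
    }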
There was also a coding exercise I heard about where you gave two groups/teams of developers a reasonably simple task, and asked one of the groups to use TDD, and make sure they followed the "simplest thing that could possibly work" rules, while the other team did things however they wanted. Then, once that is done, you have the teams switch tasks, but throw out the code written by each team, leaving only the tests. The teams are then supposed to recreate the code for the task. Typically you will find that the team who inherits the TDD code has a much easier time doing this.
Given all that, though, I think the best thing you can do personally is to start doing TDD yourself for as much of your work as possible. This has the potential to give you some very specific references for where and how TDD has proved to be beneficial within the context of the current project. In particular, if you do code reviews, your peers may notice that the code you are writing with TDD is more concise and easier to maintain than the code that has been written without TDD. Your QA team may also notice a difference in the quality of the code, which is one of the things that you hear a lot about from companies who move to TDD.
A couple suggestions. Your practicality may vary:
Win one or two people (your boss, an intern, etc.) over to your side first. Your first follower will make you a leader.
Start pair programming or mentoring. Even if it's just with an intern or two, working closely with someone can be a good way to influence their style. If you are willing, you could try becoming a manager.
Give a technical presentation on the subject. Make the focus on the why and the problem you are solving, instead of TDD. You want people to buy into the problem rather than your specific solution. Include a couple other alternatives so it doesn't seem like you are just trying to push what works for you.
Get some outside training from Object Mentor or the like. Works best if you can convince your boss and the team isn't a bunch of hardened soulless cynics.
To be honest, you should always just use a development/test cycle that works.
A lot of people like TDD, and a lot of big players like Google have embraced it; because of the high test coverage.
However, it seems that you and your team tend to be doing pretty well without it, and remember that any change in development style decreases productivity, at least temporarily. So remember the old adage: don't change what works.
However, if you and your customers are finding that there are still a lot of bugs that the tests don't cover, TDD is an ideal way to close that gap, so you should tell management that TDD is a way to increase customer satisfaction and thus make money. (That's management-speak for you!)
Perhaps Leading by example can help:
Start working like this yourself
Perhaps create a tutorial/script to set up the environment (the IDE) so that it does not add overhead to the TDD process:
Run the tests with a single keyboard shortcut
The GUI of the test system should be present in the development view (not just in the testing view), so you don't have to move between them
I am guessing that after a while people will be curious and ask you whether this TDD thing really works; you should have a prepared answer for that question :-)
Have you come across BDD at all? There's an associated change in vocabulary which I find really helps newcomers to TDD pick it up. Here's the vocab change:
http://lizkeogh.com/2009/11/06/translating-tdd-to-bdd/
I've found that using this language helps people focus on why it's useful to write the tests (or examples) first. I translated another example in the comments.
Even then, sometimes it's helpful to learn how tests are structured. If people have trouble learning how to write them first, writing them afterwards is a good learning step. You're right about the design benefits. It can take a while to grok.
In the past I've found that the best way to get TDD is to have a safe environment to practice in. Having my own toy app or running / attending workshops based on a toy app have both helped me a lot.

How much testing is enough? [closed]

I recently spent about 70% of the time coding a feature writing integration tests. At one point, I was thinking "Damn, all this hard work testing it, I know I don't have bugs here, why do I work so hard on this? Let's just skimp on the tests and finish it already…"
Five minutes later a test fails. Detailed inspection shows it’s an important, unknown bug in a 3rd party library we’re using.
So … where do you draw your line on what to test and what to take on faith? Do you test everything, or only the code where you expect most of the bugs?
In my opinion, it's important to be pragmatic when it comes to testing. Prioritize your testing efforts on the things that are most likely to fail, and/or the things that it is most important that do not fail (i.e. take probability and consequence into consideration).
Think, instead of blindly following one metric such as code coverage.
Stop when you are comfortable with the test suite and your code. Go back and add more tests when (if?) things start failing.
When you're no longer afraid to make medium to major changes in your code, then chances are you've got enough tests.
Good question!
Firstly - it sounds like your extensive integration testing paid off :)
From my personal experience:
If its a "green fields" new project,
I like to enforce strict unit testing
and have a thorough (as thorough as
possible) integration test plan
designed.
If its an existing piece of software
that has poor test coverage, then I
prefer to design a set integration
tests that test specific/known
functionality. I then introduce
tests (unit/integration) as I
progress further with the code base.
How much is enough? Tough question - I don't think that there ever can be enough!
"Too much of everything is just enough."
I don't follow strict TDD practices. I try to write enough unit tests to cover all code paths and exercise any edge cases I think are important. Basically I try to anticipate what might go wrong. I also try to match the amount of test code I write to how brittle or important I think the code under test is.
I am strict in one area: if a bug is found, I first write a test that exercises the bug and fails, make the code changes, and verify that the test passes.
Gerald Weinberg's classic book "The Psychology of Computer Programming" has lots of good stories about testing. One I especially like is in Chapter 4, "Programming as a Social Activity", where "Bill" asks a co-worker to review his code and they find seventeen bugs in only thirteen statements. Code reviews provide additional eyes to help find bugs; the more eyes you use, the better chance you have of finding ever-so-subtle bugs. As Linus said, "Given enough eyeballs, all bugs are shallow." Your tests are basically robotic eyes that will look over your code as many times as you want, at any hour of day or night, and let you know if everything is still kosher.
How many tests are enough does depend on whether you are developing from scratch or maintaining an existing system.
When starting from scratch, you don't want to spend all your time writing tests and end up failing to deliver because the 10% of the features you were able to code are exhaustively tested. There will be some amount of prioritization to do. One example is private methods. Since private methods must be used by code which is visible in some form (public/package/protected), private methods can be considered to be covered under the tests for the more-visible methods. This is where you need to include some white-box tests if there are important or obscure behaviors or edge cases in the private code.
Tests should help you make sure you 1) understand the requirements, 2) adhere to good design practices by coding for testability, and 3) know when previously existing code stops working. If you can't describe a test for some feature, I would be willing to bet that you don't understand the feature well enough to code it cleanly. Using unit test code forces you to do things like pass in as arguments those important things like database connections or instance factories instead of giving in to the temptation of letting the class do way too much by itself and turning into a 'God' object. Letting your code be your canary means that you are free to write more code. When a previously passing test fails it means one of two things, either the code no longer does what was expected or that the requirements for the feature have changed and the test simply needs to be updated to fit the new requirements.
When working with existing code, you should be able to show that all the known scenarios are covered, so that when the next change request or bug fix comes along, you will be free to dig into whatever module you see fit without the nagging worry, "what if I break something", which leads to spending more time testing even small fixes than it took to actually change the code.
So, we can't give you a hard and fast number of tests, but you should shoot for a level of coverage which increases your confidence in your ability to keep making changes or adding features; otherwise you've probably reached the point of diminishing returns.
If you or your team has been tracking metrics, you could see how many bugs are found for every test as the software life-cycle progresses. If you've defined an acceptable threshold where the time spent testing does not justify the number of bugs found, then THAT is the point at which you should stop.
You will probably never find 100% of your bugs.
I spend a lot of time on unit tests, but very little on integration tests. Unit tests allow me to build out a feature in a structured way. And now you have some nice documentation and regression tests that can be run every build.
Integration tests are a different matter. They are difficult to maintain and by definition integrate a lot of different pieces of functionality, often with infrastructure that is difficult to work with.
As with everything in life it is limited by time and resources and relative to its importance. Ideally you would test everything that you reasonably think could break. Of course you can be wrong in your estimate, but overtesting to ensure that your assumptions are right depends on how significant a bug would be vs. the need to move on to the next feature/release/project.
Note: My answer primarily address integration testing. TDD is very different. It was covered on SO before, and there you stop testing when you have no more functionality to add. TDD is about design, not bug discovery.
I prefer to unit test as much as possible. One of the greatest side-effects (other than increasing the quality of your code and helping keep some bugs away) is that, in my opinion, high unit test expectations require one to change the way they write code for the better. At least, that's how it worked out for me.
My classes are more cohesive, easier to read, and much more flexible because they're designed to be functional and testable.
That said, I default to unit test coverage requirements of 90% (line and branch) using JUnit and Cobertura (for Java). When I feel that these requirements cannot be met due to the nature of a specific class (or bugs in Cobertura), then I make exceptions.
Unit tests start with coverage, and really work for you when you've used them to test boundary conditions realistically. For advice on how to implement that goal, the other answers all have it right.
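A small, hypothetical JUnit 4 sketch of what "testing boundary conditions realistically" can mean in practice: the tests sit exactly on and around the limit, plus one clearly invalid input. The rule and all names are invented for the example.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // Hypothetical age-check rule with an 18+ limit: the interesting tests sit
    // exactly on and around the boundary, plus one clearly invalid input.
    public class AgeCheckTest {
        @Test public void justBelowTheLimitIsRejected() { assertFalse(AgeCheck.isAdult(17)); }
        @Test public void exactlyOnTheLimitIsAccepted() { assertTrue(AgeCheck.isAdult(18)); }
        @Test public void justAboveTheLimitIsAccepted() { assertTrue(AgeCheck.isAdult(19)); }

        @Test(expected = IllegalArgumentException.class)
        public void negativeAgesAreRejectedAsInvalid() { AgeCheck.isAdult(-1); }
    }

    class AgeCheck {
        static boolean isAdult(int age) {
            if (age < 0) throw new IllegalArgumentException("age must be non-negative");
            return age >= 18;
        }
    }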
This article gives some very interesting insights on the effectiveness of user testing with different numbers of users. It suggests that you can find about two thirds of your errors with only three users testing the application, and as much as 85% of your errors with just five users.
Unit testing is harder to put a discrete value on. One suggestion to keep in mind is that unit testing can help to organize your thoughts on how to develop the code you're testing. Once you've written the requirements for a piece of code and have a way to check it reliably, you can write it more quickly and reliably.
I test Everything. I hate it, but it's an important part of my work.
I worked in QA for 1.5 years before becoming a developer.
You can never test everything (when I was trained, I was told that testing all the permutations of a single text box would take longer than the lifetime of the known universe).
As a developer it's not your responsibility to know or state the priorities of what is important to test and what not to test. Testing and quality of the final product is a responsibility, but only the client can meaningfully state the priorities of features, unless they have explicitly given this responsibility to you. If there isn't a QA team and you don't know, ask the project manager to find out and prioritise.
Testing is a risk reduction exercise, and the client/user will know what is important and what isn't. Using test-first development from Extreme Programming will be helpful, so you have a good test base and can regression test after a change.
It's important to note that, due to natural selection, code can become "immune" to tests. Code Complete advises that when fixing a defect you should write a test case for it and look for similar defects; it's also a good idea to write test cases for those similar defects.

How do you persuade others to write unit tests? [closed]

I've been test-infected for a long time now, but it would seem the majority of developers I work with either have never tried it or dismiss it for one reason or another, with arguments typically being that it adds overhead to development or they don't need to bother.
What bothers me most about this is that when I come along to make changes to their code, I have a hard time getting it under test as I have to apply refactorings to make it testable and sometimes end up having to do a lot of work just so that I can test the code I'm about to write.
What I want to know is, what arguments would you use to persuade other developers to start writing unit tests? Most developers I've introduced to it take to it quite well, see the benefits and continue to use it. This always seems to be the good developers though, who are already interested in improving the quality of their code and hence can see how unit testing does this.
How do you persuade the rest of the motley crew? I'm not looking for a list of testing benefits, as I already know what these are, but rather the techniques you have used or would use to get other people on board. Tips on how to persuade management to take an active role are appreciated as well.
There's more than one side to that question, I guess. I find that actually convincing developers to start using tests is not that hard, because the list of advantages of testing often speaks for itself. That said, it is quite a barrier to actually get going, and I find that the learning curve often is a bit steep – especially for novice coders. Throwing testing frameworks, the TDD test-first mentality, and mocking frameworks at someone who is not yet comfortable with C#, .NET, or programming in general could be just too much to handle.
I work as a consultant and therefore I often have to address the problem of implementing TDD in an organization. Luckily enough, when companies hire me it is often because of my expertise in certain areas, and therefore I might have a little advantage when it comes to getting people's attention. Or maybe it's just that it's a bit easier for me as an outsider to come in to a new team and say "Hi! I've tried TDD on other projects and I know that it works!" Or maybe it's my persuasiveness/stubbornness? :) Either way, I often don't find it very hard to convince devs to start writing tests. What I find hard, though, is to teach them how to write good unit tests. And, as you point out in your question, to stay on the righteous path.
But I have found one method that I think works pretty well when it comes to teaching unit testing. I've blogged about it here, but the essence is to sit down and do some pair programming. Doing the pair programming, I start out writing the unit test first. This way I show them a bit of how the testing framework works, how I structure the tests, and often some use of mocking. Unit tests should be simple, so all in all the test should be fairly easy to understand even for junior devs. The worst part to explain is often the mocking, but using easy-to-read mocking frameworks like Moq helps a lot. Then, when the test is written (and nothing compiles or passes), I hand over the keyboard to my fellow coder so that (s)he can implement the functionality. I simply tell her/him, "Make it go green!" Then we move on to the next test; I write the test, the 'soon-to-be-test-infected-dev' next to me writes the functionality.
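To give a flavour of what such a test-first, mock-assisted session might produce, here is a rough Java/JUnit 4 sketch. The answer above uses Moq in C#; this sketch substitutes a hand-rolled fake instead, and every name in it is invented for illustration.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class WelcomeServiceTest {

        // Hand-rolled fake collaborator: records what was "sent" instead of
        // really sending mail.
        static class FakeMailer implements Mailer {
            String lastRecipient;

            @Override
            public void send(String to, String body) {
                lastRecipient = to;
            }
        }

        // Written first, while WelcomeService does not even exist yet.
        @Test
        public void greetsNewUsersByEmail() {
            FakeMailer mailer = new FakeMailer();
            new WelcomeService(mailer).welcome("dev@example.com");
            assertEquals("dev@example.com", mailer.lastRecipient);
        }
    }

    interface Mailer {
        void send(String to, String body);
    }

    // The part the other developer writes to "make it go green".
    class WelcomeService {
        private final Mailer mailer;
        WelcomeService(Mailer mailer) { this.mailer = mailer; }
        void welcome(String address) { mailer.send(address, "Welcome!"); }
    }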
Now, it's important to understand that at this point the dev(s) you are teaching are probably not yet convinced that this is the right way to code. The point where most devs seem to see the (green) light is when a test fails due to some code changes that they never thought would break any functionality. When the test that covers that functionality blows up, that's when you've got yourself a loyal TDD'er on your team. Or that's at least my experience, but as always; your mileage will vary :)
Quality speaks for itself. If you're more successful than everyone else, that's all you need to say.
Use a test-coverage tool. Make it very visible. That way everybody can easily see how much code in each area is passed, failed and untested.
Then you may be able to start a culture where "untested" is a sign of bad coding, "failed" is a sign of work in progress and "passed" is a sign of finished code.
This works best if you also do "test-first". Then "untested" becomes "you forgot step 1".
Of course you don't need 100% test coverage. But if one area has 1% coverage and another has 30%, you have a metric for which area is most likely to fail in production.
Lead by example: if you can, get evidence that there are fewer regressions in unit-tested code than elsewhere.
Get QA and management buy-in so that your process mandates unit testing.
Be ready to help others to get started with unit testing: provide assistance, supply a framework so that they can start easily, run an introductory presentation.
You just have to get used to the mantra "if it ain't tested, the work ain't done!"
Edit: To add some more meat to my facetious comment above, how can someone know if they're actually finished if they haven't tested their work?
Mind you, you will have a battle convincing others if time isn't allowed in the estimate for testing the developed code.
A one-to-one split between the effort for coding and the effort for testing seems to be a good number.
HTH
cheers,
Rob
Give compliments to those who write more tests and produce good results, show the best examples to others, and ask them to produce the same or better results.
People (and processes) don't change without one or more pain points. So you need to find the significant pain points and demonstrate how unit testing might help deal with them.
If you can't find any significant pain points, then unit testing may not add a lot of value to your current process.
As Steve Lott implies, delivering better results than the other team members will also help. But without the pain points, my experience is that people won't change.
Two ways: convince the project manager that unit testing improves quality AND saves time overall, then have him make unit tests mandatory.
Or wait for a development crunch just before an important release date, where everyone has to work overtime and weekends to finish the last features and eliminate the last bugs, only to find they've just introduced more bugs. Then point out that with proper unit tests they wouldn't have to work like that.
Another situation where unit tests can be shown as indispensable is when a release was actually delivered and turns out to contain a serious bug due to last-minute changes.
If developers are seeing that the "successful" developers are writing unit tests, and they are still not doing it then I suggest unit tests should become part of the formal development life-cycle.
E.g. nothing can be checked in until a unit test is written and reviewed.
reefnet_alex's answer will probably help you:
Is Unit Testing worth the effort?
I think it was Fowler who said:
"Imperfect tests, run frequently, are
much better than perfect tests that
are never written at all". I
interprate this as giving me
permission to write tests where I
think they'll be most useful even if
the rest of my code coverage is
woefully incomplete.
You mentioned that your manager is on board with unit tests. If that's the case, then why isn't he (she) enforcing it? It isn't your job to get everybody else to follow along or to teach them; in fact, other developers will often resent you if you try to push it on them. In order to get your fellow developers to write unit tests, the manager has to emphasize it strongly. Part of that emphasis might end up being education on unit test implementation, and you might end up being the educator, which is great, but management of it is everything.
If you're in an environment where the group decides the style of implementation, then you have more of a say in how the group dynamic should be. If you are in that sort of environment and the group doesn't want to emphasize unit tests while you do, then maybe you're in the wrong group/company.
I have found that "evangelizing" or preaching rarely works. As others have said, do it your way for your own code, make it known that you do it, but don't try to force others to do it. If people ask about it be supportive and helpful. Offer to do a few lunch-time seminars or informal dog and pony shows. That will do a lot more than just complaining to your manager or the other developers that you have a hard time writing tests for code they wrote.
Slow and steady - it is not going to change overnight.
Once I realized that, at one place where I worked, the acceptance of peer reviews improved tremendously. My group just did it and stopped trying to get others to do it. Eventually people started asking about how we got some of the success we did. Then it was easier.
We have a test framework which includes automated running of the test suite whenever anyone commits a change. If someone commits code that fails the tests, the whole team gets emailed with the errors.
This leads to introduced bugs being fixed pretty quickly.

How to test an application? [closed]

I have been building what is, IMO, a really cool RIA. But it's now close to completion and I need to test it to see if there are any bugs or counter-intuitive parts or anything like that. But how? Anytime I ask someone to try to break it, they look at it for like 3 minutes and say "it's solid". How do you guys test things? I have never used a unit test before; actually, about 3 months ago I had never even heard of a unit test, and I still don't really understand what one is. Would I have to build a whole new application to run every function? That would take forever, plus some functions may only produce errors in certain situations, so I do not understand unit tests.
The question is pretty open-ended so this post won't answer all your question. If you can refine what you are looking for, that would help.
There are two major pieces of testing you likely want to do. The first is unit testing and the second is what might be called acceptance testing.
Unit testing is trying each of the classes/methods in relative isolation and making sure they work. You can use something like jUnit, nUnit, etc. as a framework to hold your tests. Take a method and look at what the different inputs it might expect and what its outcome is. Then write a test case for each of these input/output pairs. This will tell you that most of the parts work as intended.
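A minimal sketch of that input/output-pair idea, using JUnit 4 and an invented method under test; each test case records one expected pairing.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Invented method under test: rounds a price to the nearest cent. Each test
    // captures one input/output pair the method is expected to honour.
    public class PriceRounderTest {
        @Test public void roundsDown()       { assertEquals(1.23, PriceRounder.round(1.234), 0.0001); }
        @Test public void roundsUp()         { assertEquals(1.24, PriceRounder.round(1.236), 0.0001); }
        @Test public void keepsExactValues() { assertEquals(1.20, PriceRounder.round(1.20),  0.0001); }
    }

    class PriceRounder {
        static double round(double price) {
            return Math.round(price * 100.0) / 100.0;
        }
    }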
Acceptance testing (or end-to-end testing as it is sometimes called) is running the whole system and making sure it works. Come up with a list of scenarios you expect users to do. Now systematically try them all. Try variations of them. Do they work? If so, you are likely ready to roll it out to at least a limited audience.
Also, check out How to Break Software by James Whittaker. It's one of the better testing books and is a short read.
The first thing is to systematically make sure everything works in the manner you expect it to. Then you want to try it against every realistic hardware and installed-software combination that is feasible and appropriate. Then you want to take every point of human interaction and try putting as much data in as possible, no data in, and special data that may cause exceptions. Then try doing things in an order or workflow you did not expect; sometimes certain actions depend on others. You and your friends will naturally do those steps in order, but what happens when someone doesn't? Also, having complete novices use it is a good way to see odd things users might try.
Release it in beta?
It's based on Xcode and Cocoa development, but this video is still a great introduction to unit testing. Unit testing is really something that should be done alongside development, so if your application is almost finished it's going to take a while to implement.
Firebug has a good profiler for web apps. As for testing JS files, I use Scriptaculous. Whatever backend you are using needs to be fully tested too.
But before you do that, you need to understand what unit testing is. Unit testing is verifying that all of the individual units of source code function as they are intended. This means that you verify the output of all of your functions/methods. Basically, read this. There are different testing strategies beyond unit testing such as integration testing, which is testing that different modules integrate with one another. What you are asking people to do is Acceptance testing, which is verifying that it looks and behaves according to the original plan. Here is more on various testing strategies.
PS: always test boundary conditions