Our code sucks and I'm powerless to fix it. Help! [closed] - c++

Our code sucks. Actually, let me clarify that. Our old code sucks. It's difficult to debug and is full of abstractions that few people understand or even remember. Just yesterday I spent an hour debugging in an area that I've worked for over a year and found myself thinking, "Wow, this is really painful." It's not anyone's fault - I'm sure it all made perfect sense initially. The worst part is usually It Just Works...provided you don't ask it to do anything outside of its comfort zone.
Our new code is pretty good. I think we're doing a lot of good things there. It's clear, consistent, and (hopefully) maintainable. We've got a Hudson server running for continuous integration and we have the beginnings of a unit test suite in place. The problem is our management is laser-focused on writing New Code. There's no time to give Old Code (or even old New Code) the TLC it so desperately needs. At any given moment our scrum backlog (for six developers) has about 140 items and around a dozen defects. And those numbers aren't changing much. We're adding things as fast as we can burn them down.
So what can I do to avoid the headaches of marathon debugging sessions mired in the depths of Old Code? Every sprint is filled to the brim with new development and showstopper defects. Specifically...
What can I do to help maintenance and refactoring tasks get high enough priority to be worked?
Are there any C++-specific strategies you employ to help prevent New Code from rotting so quickly?

Your management may be focused on getting working features into the product, and keeping them working. In this case, you will need to make a business case for refactoring the old stuff, in that by X investment of time and effort you can reduce necessary maintenance time by Y over period Z. Or your management may be fundamentally clueless (this happens, but less often than most developers seem to think), in which case you'll never get permission.
You need to see the business point of view. It doesn't matter to the end user whether the code is ugly or elegant, only what the software does. The cost of bad code is potential unreliability and additional difficulty in changing it; the emotional distress it causes to the programmer is rarely considered.
If you can't get permission to go in and refactor, you can always try it on your own, a little bit at a time. Whenever you fix a bug, do a little rewriting to make things clearer. This may turn out to be faster than the minimum possible fix, particularly in verifying that the code now works. Even if it isn't, it's usually possible to take a little more time on a bug fix without getting into trouble. Just don't get carried away.
If you can leave the code just a little better each time you go in, you'll feel a lot better about it.

Stand Up Meetings
I might go to my mechanic, and we have a little stand-up meeting in the morning:
I tell him I want my wheels aligned, my tires rotated, and my oil changed. I mention that "Oh by the way my brakes felt a little soft on the way in. Could [he] take a look at them? How soon can I get my car back because I need to get back to work?"
He pops his head under my car, pops back up and says my brakes are leaking oil and starting to fail. He will need a part that will arrive at 10:30am. His man won't finish before lunch, but I should get my car back by 1:30pm or so. He's booked solid so he won't be able to do any of the other stuff today, and I will have to book another appointment.
I ask if he can do the other stuff and I come back for the brakes. He tells me he really can't let me drive out of there without fixing the brakes because they might cause an accident, but if I want to go to another mechanic, he can call for a tow.
Since the car will be done so shortly after lunch, I ask if his man can take a late lunch so I can get my car back an hour earlier.
He tells me his men come in at 8am and often work into the evening. They earn every break they get, and his man deserves to take his lunch with everyone else.
None of that is what I wanted to hear. I wanted to hear that I would drive out of there in a half hour with my wheels, tires and oil done.
My mechanic was just straight up and honest with me. Are you straight up and honest with your management? Or do you avoid telling them things they don't want to hear?
Unit Testing
I wouldn't touch a line of code I didn't understand, and I wouldn't check in a new line of code I didn't test thoroughly. (At least, not intentionally.)
Your question seems to imply that somehow a large corpus of poorly documented code made it past review without any unit tests. Maybe you participated in that, and maybe you didn't. Everyone involved needs to accept responsibility for that--including management. Regardless, what's done is done. You cannot go back and change it.
However, right now, in the present time, it is everybody's responsibility to stop the behavior that led to the problem in the first place. You say you spent a year working in code that you find difficult to understand and that has no unit tests. During that year, as you worked hard to improve your understanding, how many unit tests did you write to document and to verify that understanding?
As you struggled through the code slowly gaining understanding, how many comments did you add so you wouldn't have to struggle next time?
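One practical form this takes is the characterization test: a test that simply pins down what the legacy code does today, so the next person doesn't have to rediscover it with a debugger. A minimal plain-assert sketch in C++ (the function and its behavior are hypothetical stand-ins; in practice you would use whatever test framework your team has):

    #include <cassert>

    // Stand-in for a real legacy function (hypothetical example; in a real code
    // base this lives deep in the old code and you would only declare it here).
    int discountCents(int orderTotalCents, int customerYears) {
        if (customerYears < 1) return 0;         // surprise: brand-new customers get nothing
        if (orderTotalCents < 10000) return 0;   // no discount under $100
        return orderTotalCents / 20;             // 5% otherwise
    }

    // Characterization tests: they assert what the code actually does *today*,
    // documenting the behavior you just worked out the hard way.
    int main() {
        assert(discountCents(9999, 10) == 0);        // under $100: no discount
        assert(discountCents(20000, 10) == 1000);    // 5% once at or above $100
        assert(discountCents(20000, 0) == 0);        // the surprise found while debugging
        return 0;
    }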
Scrum Backlog
Personally, I think the term "Scrum backlog" is a misnomer. A list of things to do is just a list--a shopping list if you will. I had a list when I went to the mechanic. My stand up meeting with the mechanic was really more of a sprint planning meeting.
A sprint planning meeting is a negotiation. If your management is time boxing without that negotiation, they aren't managing anything. They are simply trying to cram 10 lbs of shit into a 5 lb sack, and it's your responsibility to tell them so.
When you show up to a sprint planning meeting, you are expected to commit to a body of work, and it's your responsibility to prepare for that. Preparation means having some idea of what you will have to do to complete each item on the list--including the time it takes to understand obscure code and the time it takes to write unit tests.
If someone invites you to a planning meeting where you won't have time to prepare, decline the meeting and suggest when to reschedule so you will have time.
If you have an existing body of code with no unit tests and a feature might conceivably affect the operation of that code, you need to write unit tests for as much of the old code as might be affected. When you commit to writing the feature, you are committing to doing that work. If that leaves you too little time to commit to some other feature, just say so. Don't commit to the other feature.
When you commit to fix a defect, you commit to testing your work. Obviously, that means writing a unit test for the defect. But if it involves old code with no unit tests, it also means writing unit tests for things that aren't broken yet, but might break due to your change. How else will you test the fix?
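A minimal sketch of that workflow, with a hypothetical function and defect number (any real defect would of course be messier): one test written first to reproduce the defect, plus guard tests that pin the neighboring behavior you don't want to break.

    #include <cassert>
    #include <string>

    // Hypothetical function that had defect #1234: it used to truncate names
    // longer than 8 characters (the old body was `return name.substr(0, 8);`).
    std::string formatBadgeName(const std::string& name) {
        return name;   // fixed behavior
    }

    int main() {
        // Guard tests: behavior that was never broken but could have broken
        // with the change, written because the old code had no tests at all.
        assert(formatBadgeName("Ann") == "Ann");
        assert(formatBadgeName("") == "");

        // Regression test for defect #1234, written *before* the fix so it
        // failed against the old code and documents the defect ever after.
        assert(formatBadgeName("Maximilian") == "Maximilian");
        return 0;
    }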
If your defect list remains a constant size, your team regresses as much as it fixes. Politely explain to whoever needs to understand that unit tests prevent the regressions that currently keep your defect list from shrinking.
If you fail to write those unit tests because you commit to too many features, whose responsibility is that?
Refactoring
When you refactor code, you have to test all of it, and that means writing unit tests for all of it. If you have a large body of code with no unit tests, you will have to write all of those unit tests before you refactor.
I suggest you hold off on refactoring until those unit tests are in place. In the meantime, if you insist on including unit tests in your estimates for the work you commit to, eventually all those unit tests will be there. And then you can refactor.
The one exception to that is refactoring for testability. You may find that some of the code was not designed for test and that you have to refactor for things like dependency injection before you can create your unit tests. When you commit to writing the feature that requires the unit test, you commit to making the code testable. Include that in your estimate when you commit to the feature.
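For C++ code that calls a concrete dependency (a database, the file system, hardware) directly, that testability refactoring usually means introducing an interface at the seam and injecting it. A minimal sketch with hypothetical names, using a hand-rolled fake rather than any particular mocking framework:

    #include <cassert>
    #include <string>

    // Hypothetical example: the old code called a concrete database class
    // directly, which made it untestable. Extracting an interface and injecting
    // it is the "refactor for testability" step described above.
    struct OrderStore {                     // the seam introduced for testing
        virtual ~OrderStore() = default;
        virtual int orderCount(const std::string& customer) const = 0;
    };

    // Production code depends only on the interface, not on the real database.
    bool isFrequentBuyer(const OrderStore& store, const std::string& customer) {
        return store.orderCount(customer) >= 10;
    }

    // A hand-rolled fake used only in tests; no database needed.
    struct FakeOrderStore : OrderStore {
        int count;
        explicit FakeOrderStore(int c) : count(c) {}
        int orderCount(const std::string&) const override { return count; }
    };

    int main() {
        assert(isFrequentBuyer(FakeOrderStore{12}, "alice"));
        assert(!isFrequentBuyer(FakeOrderStore{3}, "bob"));
        return 0;
    }

In production the real database-backed implementation of OrderStore is passed in instead; the calling code never knows the difference.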
Commitment + Responsibility = Power
You say you are powerless. When you accept responsibility and commit to doing what needs doing, I think you will find you have all the power you need.
P.S. If anyone complains about anybody "wasting time" writing multiple unit tests when fixing a single defect, show them this video on the 80:20 rule and pound "defect clusters" into their brains.

It is hard to tell much from the information you give. One question I would ask: is the new code being written to replace the old code? That would be the logical reason for writing it, and if that is what you are doing, abandon the old code.
Is it also the old code that has the showstopper defects? If so, where are they coming from? Old code does not usually have "showstopper" defects; it just grinds closer and closer to a halt. It is old code, after all - it should have the same old defects and the same old limitations, not stuff that has to be looked at right away. Showstopper defects are new-code defects. It sounds like there is active development going on in the old code.
If you are writing all this new code on top of old code that sucks, with no plans to fix it once and for all, sorry, there is only so much you can do when you are too busy burying yourself to dig yourself out.
If the latter is the case, you should recognize where you are headed and try to detach a little. It is all going to collapse eventually; if you plan on being around, save your strength for a worthwhile battle.
In the meantime, try to pick up some design patterns. There are several that can at least help shield your new code from the old stuff, but still, ultimately it is just hard to write good code against bad code.
And your sprints sound confused. Is there not an overall direction? That should determine how much backlog you have. Things can change month to month, but is there not a clear sense of moving towards some final goal?
And new code rotting? The way you prevent that is to have a meaningful design, a meaningful direction, and a quality team that is committed to both the quality of their work and the vision of the design. If you have that, discipline is what maintains quality. If you don't have that, sorry, but you were basically writing code with no purpose already; it was rotten on the vine.
Not being critical, just trying to be honest. Take a deep breath. Slow down. You seem like you need it. Look at what you have written here. It tells us nothing. You talk of refactoring, scrums, showstoppers, defects, old code, new code. What does any of that mean? It is all jumbled up.
What about "new initiatives versus legacy systems"? "Need to refactor early sprint cycle code in terms of latest understanding etc." Are showstoppers in fact "Early components of the current enterprise initiatives have been released but are experiencing problems and no time is budgeted because of new development".
These would be meaningful concepts. You've given us nothing. I understand it is intense. My sprints are crazy too; we add a lot of backlog items because we could not get many requirements up front (a lot of my new requirements result from having to also contend with external regulatory bodies, and the normal business process is not always available).
But at the same time I am ground down by the sheer magnitude of what has to be done and the time to do it. Everything that is added to my backlog needs to be there. It is crazy, but at the same time I have a very clear idea of where I have been, where I need to go, and why the road is getting harder.
Step back, clear your thoughts, and figure out the same - where you have been and where you are going. Because if you do know that, it sure is not obvious from what you've written. If you cannot communicate anything your peers can understand, how far are you going to get with a business manager?

Old code always sucks. There are probably some rare exceptions written by people with names like Kernighan or Thompson but, for the typical "code written in an office" stuff, over time it's gonna stink. Developers get more experienced. Newer practices, such as continuous integration, change the game. Stuff gets forgotten. New maintainers fail to grasp designs and wish for re-writes. So best accept this as normal.
Some random things that might help...
Talk about it with your team. Share your experiences and your concerns, while avoiding "man your old code sucks" (for obvious reasons) and see what the consensus is. You're probably not alone.
Forget about your managers. Don't expose them to this level of detail - they don't need to think about new vs. old code and probably won't understand if they do. This is a problem for your team to tackle and, if necessary, to make your PO aware of.
Be open to the possibility that you may be able to throw stuff out. Some of that old code probably relates to features that are no longer being used or failed to be adopted by users in the first place. To make this work for you, you really need to go a level higher and think in terms of where the code really delivers user or business value vs. where it's just a ball of mud that no one is brave enough to take a decision on. Who dares, wins.
Relax your view of architectural consistency. There's always a way to tap into a working system with new code somewhere, and that may allow you to slowly migrate to a newer, smarter approach, while preserving the old long enough not to break existing things.
Overall, winning in this kind of situation is less about coding skills and much more about smart choices and handling the human aspects.
Hope that helps.

I recommend keeping track of how many bugs and code changes involve your "old code" and present this to either your manager or to your fellow developers at your next team meeting. With this in hand it should be simple enough to convince them that more needs to be done to refactor your "old code" and bring it up to par with your "new code".
It would also be prudent to document the parts of your "old code" that are most difficult to understand. These would also be the parts of your "old code" that you should be refactoring first once you get the approval.

Something to try: group your classes into - say - worst 10%, best 10%, and the rest. Deliver the lists to your management, saying, "I predict the majority of bugs over the next quarter will be found in the first set." Base the grouping on length, cyclomatic complexity, test coverage - whatever tools are handy and comfortable to you. Then sit back and watch - and be right. Now you've got some credibility, some leverage when you say, "I'd like to invest some resources in making our bad code better, to reduce bugs and maintenance costs - and I know where to invest that energy, see?"
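If you have no metrics tooling handy, even a crude proxy such as file length can produce a first draft of that worst-10% list. A rough sketch, assuming a hypothetical src/ directory layout (a real pass would fold in cyclomatic complexity and coverage from whatever tools you actually have):

    #include <algorithm>
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <utility>
    #include <vector>

    namespace fs = std::filesystem;

    int main() {
        // Collect (line count, path) for every .cpp/.h file under a hypothetical src/ tree.
        std::vector<std::pair<long, fs::path>> files;
        for (const auto& entry : fs::recursive_directory_iterator("src")) {
            if (!entry.is_regular_file()) continue;
            const auto ext = entry.path().extension();
            if (ext != ".cpp" && ext != ".h") continue;
            std::ifstream in(entry.path());
            const long lines = std::count(std::istreambuf_iterator<char>(in),
                                          std::istreambuf_iterator<char>(), '\n');
            files.emplace_back(lines, entry.path());
        }
        // Longest files first: a crude stand-in for the "worst 10%" bucket.
        std::sort(files.rbegin(), files.rend());
        const std::size_t worst = files.empty() ? 0 : std::max<std::size_t>(1, files.size() / 10);
        for (std::size_t i = 0; i < worst; ++i)
            std::cout << files[i].first << '\t' << files[i].second.string() << '\n';
        return 0;
    }

Nothing about this is precise; the point is only to make a falsifiable prediction that you can check against next quarter's bug reports.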

You could create diagrams and sketches of how the new code works and how the classes and functions are related to one another. You could use FreeMind or maybe Dia. And I definitely agree with Documenting and commenting your code.
I once had a problem with this too. I wrote a font class for J2ME for my own language. It was awful, for reasons that you might also see in your own code:
No Comments or documentation
Less object oriented
bad variable / function names
...
But after a few months I was forced to write the whole thing again. Now I've learned to use meaningful variable names (which are sometimes very long), to write more comments than code, and to use diagrams for the project's classes and their relationships.
I don't know if this is a real answer, but it definitely worked for me. For old code you might actually have to reread the whole thing and add comments as you remember what each piece does.
Hope it helped.

Talk to your Product Owner! Explain that time invested in refactoring the old code will bring the benefit of higher team velocity on new features once this obstacle is removed.

Other than the approaches mentioned above, which are good, you can also try these:
For keeping future code clean
Try pair programming, at least for the parts where it makes sense. It's an effective way of making reviewed, refactored code a practice.
Try to get refactoring into the definition of "done". Then it will be part of the estimation process and time will be allotted accordingly. So the definition of done might include: coded, unit tested, functionally tested, performance tested, code reviewed, refactored, and integrated (or something like this).
For Cleaning up the old code:
Unit tests are great for helping you refactor and figure out how things work.
I agree with the comments that a business case needs to be made for large-scale refactoring. But, small-scale refactoring could be easily included in the estimate and will provide immediate return. i.e.: I spend 2 hours rewriting a piece but I would have spent that time looking for bugs anyway.
You may also want to consider getting the product owner and scrummaster to capture a separate velocity for the old code vs the new code, and use that accordingly.

If there's a desired new feature and you can delineate a non-overwhelming hunk of code that is in the way, then you might be able to get management's blessing to replace the old code with new code that has the desired new feature. When I did this, I had to write a somewhat ugly shim layer to meet the old interfaces of the part of the software I wasn't going to touch, plus a test harness that could exercise both the existing code and the new code, to make sure the new code, as seen through the shim layer, could fool the rest of the application into thinking nothing had changed. By reworking the portion we reworked, we were able to show huge performance benefits, compatibility with desired new hardware, and a reduction in each field site's need for expertise in administering space for the application - and the new code was much more maintainable. That last point mattered not a whit to the users, but the other advantages from the rework were attractive enough to "sell" the users on the merits of a somewhat painful database conversion.
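The shim described here is essentially an adapter: untouched callers keep using the old entry points, and the shim forwards to the new implementation while preserving the old conventions. A toy sketch with hypothetical names (the real interfaces were certainly messier):

    #include <cassert>
    #include <string>

    // The old C-style interface the rest of the application already calls.
    // (Hypothetical; stands in for whatever legacy entry points exist.)
    int legacy_store_record(const char* key, const char* value);

    // The new, cleaner implementation we actually want to maintain.
    class RecordStore {
    public:
        void put(const std::string& key, const std::string& value) {
            last_key_ = key;
            last_value_ = value;   // toy behavior; the real one hits the new storage engine
        }
        std::string get(const std::string& key) const {
            return key == last_key_ ? last_value_ : std::string{};
        }
    private:
        std::string last_key_, last_value_;
    };

    // The shim: keeps the old signature alive so untouched code never notices.
    int legacy_store_record(const char* key, const char* value) {
        static RecordStore store;              // single instance, like the old global state
        if (!key || !value) return -1;         // preserve the old error convention
        store.put(key, value);
        return 0;                              // the old API returned 0 on success
    }

    int main() {
        assert(legacy_store_record("answer", "42") == 0);   // old callers still work
        assert(legacy_store_record(nullptr, "x") == -1);
        return 0;
    }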
Another more modest success story: we had a decent trouble tracking system that had literally years of history. There was a subsystem of our application that was famed for the speed with which it would burn out maintenance programmers. Clearly (well, clearly in my mind) it was in need of a major re-write, but management wasn't enthused about that. We were able to dig through the history in the trouble tracking data to show the staffing level that had gone into maintaining this module, and for all that effort, the trouble tickets per month against that module continued to arrive at a constant rate. When faced with actual data like that, even the reluctant managers who had long been tight-fisted about staffing re-work of that subsystem could see the merit of assigning staff to rework that module.
The approach as before was to leave the input and output of that module alone. The good news was that throwing virtual memory at the new code with its fancy new data structures did give a noticeable performance improvement to the module. The bad news is that we were nearly done with the re-implementation before we really understood what was wrong in the original implementation such that it did work most of the time, but managed to fail on some of the transactions on some days. The first cut faithfully reproduced those bugs, but the bugs were easier to understand in the reworked code so we now had a shot at really fixing the real problem. In retrospect, maybe we'd have been smarter to have captured data that produced the problems and have taken better care to make sure the reworked version didn't reproduce that problem. But, the truth is, nobody understood the problem until we were quite far along on the re-write. So, the re-write gave improved performance to the users and improved understanding to the current programmers, such that the real problem could really be resolved at last.
A failed example: there was yet another incredibly ugly module that was persistently a sore spot. Alas, I wasn't clever enough to understand the de facto interfaces to this particular wretched hive of scum and villainy, at least not in the time frame of the nominal release schedule. I'd like to believe that given more time we could have figured out a suitable plan for reworking that piece of the system too, and maybe once we understood it, we could even have identified user-desired improvements to fit into the rewrite. But I can't promise that you'll find a prize in every box. If the box is entirely obscure to you, slicing away a chunk of it and replacing that piece with clean code is hard to do. The guy who had charge of that module is probably the one who was best positioned to figure out a plan of attack, but he saw the frequent crashes and calls from the field for assistance as "job security". I don't think management ever really recognized that he needed to be eased aside for someone with a hunger for change, but that's probably what was needed.
Drew

Related

Any advice for a developer given the task of enhancing & refactoring a business critical application?

Recently I inherited a business critical project at work to "enhance". The code has been worked on and passed through many hands over the past five years. Consultants and full-time employees who are no longer with the company have butchered this very delicate and overly sensitive application. Most of us have to deal with legacy code or this type of project... it's part of being a developer... but...
There are zero unit tests and zero system tests. Logic is inter-mingled (and sometimes duplicated for no reason) between stored procedures, views (yes, I said views) and code. Documentation? Yeah, right.
I am scared. Yes, very scared to make even the most minimal tweak or refactor. One little mishap, and there would be major income loss and potential legal issues for my employer.
So, any advice? My first thought would be to begin writing assertions/unit tests against the existing code. However, that can only go so far because there is a lot of logic embedded in stored procedures. (I know it's possible to test stored procedures, but historically it's much more difficult compared to unit testing source code logic.)
Another or additional approach would be to capture the database state before and after the application has performed a function, make some code changes, then do the database state comparison again.
I just rewrote thousands of lines of the most complex subsystem of an enterprise filesystem to make it multi-threaded, so all of this comes from experience. If the rewrite is justified (it is if the rewrite is being done to significantly enhance capabilities, or if existing code is coming in the way of putting in more enhancements), then here are the pointers:
You need to be confident in your own abilities first of all to do this. That comes only if you have enough prior experience with the technologies involved.
Communicate, communicate, communicate. Let all involved stake-holders know, this is a mess, this is risky, this cannot be done in a hurry, this will need to be done piece-meal - attack one area at a time.
Understand the system inside out. Document every nuance, trick and hack. Document the overall design. Ask any old-timers about historical reasons for the existence of any code you cannot justify. These are the mines you don't want to step on - you might think those are useless pieces of code and then regret later after getting rid of them.
Unit test. Work the system through any test suite which already exists; if no tests exist, write them for the existing code first.
Spew debugging code all over the place during the rewrite - asserts, logging, console prints (you should have the ability to turn them on and off, as well as specify different levels of output, i.e. control verbosity); see the sketch after this list. This is a must in my experience, and helps tremendously during a rewrite.
When going through the code, make a list of all things that need to be done - things you need to find out, things you need to write tests for, things you need to ask questions about, notes to remind you how to refactor some piece of code, anything that can affect your rewrite... you cannot afford to forget anything! I do this using Outlook Tasks (just make sure whatever you use is always in front of you - this is the first app I open as soon as I sit down on the desk). If I get interrupted, I write down anything that I have been thinking about and hints about where to continue after coming back to the task.
Try avoiding hacks in your rewrite (that's one of the reasons you are rewriting it). Think about tough problems you encounter. Discuss them with other people and bounce off your ideas against them (nothing beats this), and put in clean solutions. Look at all the tasks you put into the todo list - make a 10,000 feet picture of existing design, then decide how the new rewrite would look like (in terms of modules, sub-modules, how they fit together etc.).
Tackle the toughest problems before any other. That'll save you from running into problems you cannot solve near the end of tunnel, and save you from taking any steps backward. Of course, you need to know what the toughest problems will be - so again, better document everything first during your forays into existing code.
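For the throwaway diagnostics mentioned in the list above (asserts, logging, console prints with controllable verbosity), even something as small as a level-gated macro goes a long way. A minimal sketch, assuming a hypothetical TRACE_LEVEL environment variable selects the verbosity:

    #include <cstdio>
    #include <cstdlib>

    // Verbosity is read once from a hypothetical TRACE_LEVEL environment variable:
    // 0 = silent, 1 = important milestones, 2 = noisy per-item tracing.
    static int trace_level() {
        static const int level = [] {
            const char* env = std::getenv("TRACE_LEVEL");
            return env ? std::atoi(env) : 0;
        }();
        return level;
    }

    // Cheap to leave in during the rewrite, easy to silence afterwards.
    #define TRACE(level, ...)                                      \
        do {                                                       \
            if (trace_level() >= (level)) {                        \
                std::fprintf(stderr, "[trace] " __VA_ARGS__);      \
                std::fprintf(stderr, "\n");                        \
            }                                                      \
        } while (0)

    int main() {
        TRACE(1, "starting migration of %d records", 42);
        TRACE(2, "per-record detail only shown at TRACE_LEVEL>=2");
        return 0;
    }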
Get a very firm list of requirements.
Make sure you have implicit requirements as well as explicit ones - i.e. what programs it has to work with, and how.
Write all scenarios and use cases for how it is currently being used.
Write a lot of unit tests.
Write a lot of integration tests to test the integration of the program with existing programs it has to work with.
Talk to everyone who uses the program to find out more implicit requirements.
Test, test, test changes before moving into production.
CYA :)
Two things, beyond #Sudhanshu's great list (and, to some extent, disagreeing with his #8):
First, be aware that untested code is buggy code - what you are starting with almost certainly does not work correctly, for any definition of "correct" other than "works just like the unmodified code". That is, be prepared to find unexpected behavior in the system, to ask experts in the system about that behavior, and for them to conclude that it's not working the way it should. Prepare them for it, too - warn them that without tests or other documentation, there's no reason to think it works the way they think it's working.
Next: Refactor the Low-Hanging Fruit. Take it easy, take it slow, take it very carefully. Notice something easy in the code - duplication, say - and test the hell out of whatever methods contain the duplication, then eliminate it. Lather, rinse, repeat. Don't write tests for everything before making changes, but write tests for whatever you're changing. This way, it stays releasable at every stage and you are continuously adding value, continuously improving the code base.
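As a concrete (and entirely hypothetical) illustration of that low-hanging fruit: two report functions shared duplicated header-building logic, tests were written against both first, and only then was the duplication pulled out. A minimal sketch of the end state:

    #include <cassert>
    #include <string>

    // Hypothetical example: dailyReport() and weeklyReport() used to duplicate
    // the same header-building code; buildHeader() is the extracted helper.
    std::string buildHeader(const std::string& title) {
        return "=== " + title + " ===";
    }

    std::string dailyReport()  { return buildHeader("Daily")  + "\n..."; }
    std::string weeklyReport() { return buildHeader("Weekly") + "\n..."; }

    int main() {
        // These assertions pinned the old behavior, so the extraction was safe.
        assert(dailyReport().rfind("=== Daily ===\n", 0) == 0);
        assert(weeklyReport().rfind("=== Weekly ===\n", 0) == 0);
        return 0;
    }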
I said "two things", but I guess I'll add a third: Manage expectations. Let your customer know how scared you are of this task; let them know how bad what they've got is. Let them know how slow progress will be, and let them know you'll keep them informed of that progress (and, of course, do it). Your customer may think s/he's asking for "just a little fix" - and the functionality may indeed change only a little - but that doesn't mean it's not going to be a lot of work and a lot of time. You understand that; your customer needs to, too.
I've had this problem before and I've asked around (before the days of stack overflow) and this book has always been recommended to me. http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
Ask yourself this: what are you trying to achieve? What is your mission? How much time do you have? What is the measurement for success? What risks are there? How do you mitigate and deal with them?
Don't touch anything unless you know what it is you're trying to achieve.
The code might be "bad" but what does that mean? The code works right? So if you rewrite the code so it does the same thing you'll have spent a lot of time rewriting something introducing bugs along the way so the code does the same thing? To what end?
The simplest thing you can do is document what the system does. And I don't mean write mind-numbing Word documents no one will ever read. I mean writing tests on key functionality, refactoring the code if necessary to allow such tests to be written.
You said you are scared to touch the code because of legal issues, potential income loss, and the lack of any documentation. So do you understand the code? The first thing you should do is document it and make sure you understand it before you even think about refactoring. Once you have done that and identified the problem areas, make a list of your refactoring proposals in order of maximum benefit with minimum changes and attack it incrementally. Refactoring makes additional sense if: the expected lifespan of the code will be long, new features will be added, or bug fixes are numerous. As for testing the database state - I worked on a project recently where that is exactly what we did, with success.
Is it possible to get a separation of the DB and non-DB parts, so that a DBA can take on the challenge of the stored procedures and databases themselves freeing you up to work on the other parts of the system? This also presumes that there is a DBA who can step up and take that part of the application.
If that isn't possible, then I'd make the suggestion of seeing how big is the codebase and if it is possible to get some assistance so it isn't all on you. While this could be seen as side-stepping responsibility, the point would be that things shouldn't be in just one person's hands usually as they can disappear at times.
Good luck!

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible.
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horrible bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure"
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can go to as far as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is because software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "less bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now". Even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control and software engineering as a whole is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF) which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects like the software engineering schools want to make you think. He just adds another parameter to control which is "Do I want to control this project? Will it be needed?"
Laziness / considered boring.
Management feeling it's unnecessary - ignorant "Just do it right" attitude.
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project."
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find such-minded people for example from the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests and reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests and the programmers could simply implement something new instead of looking after the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes a life of its own (and becomes prominent) as different business units start depending on it for their own business process.
At which point a new solution needs to be engineered; but without practice in using these tools and good-practices, these tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams are support to the business, where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like Thought Works can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuos integration, code reviews, whatever) needs to have an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of the value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning how to test what (like how to test logging in or accessing files, how to abstract yourself from the database, how to do mocking and use a mocking framework, and learning techniques and patterns that increase testability) - see the sketch after this list.
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
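As one concrete instance of "learning how to test what" from the list above: code that reads files becomes trivially testable once it is written against a stream instead of a filename. A minimal sketch with hypothetical names:

    #include <cassert>
    #include <istream>
    #include <sstream>
    #include <string>

    // Instead of taking a filename and opening it internally (untestable without
    // real files on disk), the parser takes any std::istream.
    int countNonEmptyLines(std::istream& in) {
        int count = 0;
        std::string line;
        while (std::getline(in, line))
            if (!line.empty()) ++count;
        return count;
    }

    int main() {
        // In production: std::ifstream file("data.txt"); countNonEmptyLines(file);
        // In a test: feed it an in-memory stream, no disk access required.
        std::istringstream fake("first\n\nsecond\nthird\n");
        assert(countNonEmptyLines(fake) == 3);
        return 0;
    }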
So until you find the right way: yeah, it's dull, unrewarding, hard to do, time-consuming, etc.
EDIT:
In this blogpost I go in depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and testing techniques and tools. Technical conference such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes a back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mockup data to be fed to the methods. Even if you set up mockup data - can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered; that is, unit testing is bad at simulating the unexpected - what happens in a power outage, or when the network link sends bad data that is still CRC-correct. Writing tests for cases you haven't thought of is futile.
I am all in favour of code inspections as they let programmers share experience and code style from other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" (e.g. without unit testing), why write any tests at all? Especially when it is not nearly as interesting as implementing the next feature in the program. Most people just tend to be lazy when it comes down to it, which isn't a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was being changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and not in need of unit tests. So we have a few tests that stay there as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is the lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and shallow-test it, not to think of all possible cases and relations to other project parts and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It will take time and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, though two to three is really needed. So new colleagues do not produce proper tests. And there is no point in creating bad tests.
It's 'dull' to chase some random high-importance 'feature' for more than a day through a mysterious code jungle written by someone else x years ago, with no clue what's going wrong or why, and with absolutely no idea what could fix it, when the task was supposed to be finished in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing to make me more productive, but I've seen lots of badly formatted, unreadable poorly designed code which passed all its tests ( generally long-in-the-tooth code which had been patched many times ). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated; the more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, tells me that I have too many methods in my class. So I should cut the class into abstract classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since they might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting and says you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on detecting when code is too complicated and not care so much about the little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you change anything you have to worry about broken (poorly written) unit tests and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit-testing has its place and for example I did a lot of unit-testing in my webpage parser, because all the time I found diffrent bugs or not supported tags. Testing Database programs is really hard if you want to also test database logic, DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing, but I am the only person who writes tests (I want to get better at it; that's the only reason I do it). I understand that for the others, writing tests is just more work and you get nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or just comment them out (if you use Maven).
Frameworks aren't really there for real web-app integration testing (the unit test might pass, but the page might still not work), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change to the web page. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it, and test data for 5 joins is a lot of work with no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.
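One common workaround for the static-method problem, sketched below with entirely made-up names (CurrentUser stands in for whatever static API the framework exposes), is to hide the static call behind a tiny interface so tests can substitute a fake. This isn't specific to any particular framework; it's just the usual seam-creating trick.

    // Stand-in for the framework's static API (made up for this sketch).
    class CurrentUser {
        static String resolve() { return "bob"; }
    }

    // The seam: a tiny interface the rest of the code depends on instead of the static call.
    interface UserLookup {
        String currentUserName();
    }

    // Production implementation -- the hard-to-test static call lives here, and only here.
    class FrameworkUserLookup implements UserLookup {
        @Override
        public String currentUserName() {
            return CurrentUser.resolve();
        }
    }

    // Code under test depends on the interface, not the static method.
    class GreetingService {
        private final UserLookup users;
        GreetingService(UserLookup users) { this.users = users; }
        String greeting() { return "Hello, " + users.currentUserName(); }
    }

    public class StaticWrappingSketch {
        public static void main(String[] args) {
            // Production wiring:
            System.out.println(new GreetingService(new FrameworkUserLookup()).greeting());
            // Test wiring: substitute a fake without touching the framework at all.
            System.out.println(new GreetingService(() -> "alice").greeting());
        }
    }

The cost is one extra interface and one thin adapter class per static dependency, which is part of why people complain about the volume of code that testing demands.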

Another one about measuring developer performance [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 3 years ago.
I know the question about measuring developer performance has been asked to death, but please bear with me. I know the age old debate about how you cannot measure performance of developers, but the reality is, at our company there is a "need" to do that one way or another.
I work for a relatively small company (small in terms of developers), and management felt the need to measure developer performance based on "functionality that passes test (QA) at first iteration".
We somehow managed to convince them that this was a bad idea for various reasons, and instead settled on measuring developers by whether the code they hand over to test has all unit tests passing. Since in our team there is no "requirement" per se to write unit tests first, we felt it was an opportunity to formalise the need to write unit tests - i.e. give developers some incentive to write them.
My problem is this: since arguably we will not be releasing code to QA that do not pass all unit tests, how can one reasonably measure developer performance based on unit tests? Based on unit tests, what makes a good developer stand out?
Functionality that fails although unit tests pass?
Not writing unit tests for a given piece of functionality at all, or writing inadequate unit tests?
Quality of unit test written?
Number of Unit tests written?
Any suggestions would be much appreciated. Or am I completely off the mark in this kind of performance measurement?
The question is not "what do we measure?"
The question is "What is broken?"
Followed by "how do we measure the breakage?"
Followed by "how do we measure the improvement?"
Until you have something you're trying to fix, here's what happens.
You pick something to measure.
People respond by doing what "looks" best according to that metric.
You realize you're measuring the wrong thing.
Specifically.
"functionalities that pass test (QA) at first iteration" Which means what? Save the code until it HAS to work. Later looks better. So, delay until you pass QA on the first iteration.
"Functionality that fail although unit test passes?" This appears to be "incomplete unit tests". So you overtest everything. Take plenty of time to write all possible tests. Slow down delivery so you're not penalized by this measurement.
"Not writing unit test for a given functionality at all, or not adequate unit tests written?" Not sure how you measure this, but it sounds the same as the previous one.
"Quality of unit test written?" Subjective measurement. Always a good plan. Define how you're going to measure quality, and you'll get stuff that maximizes that specific measurement. Want more comments? Count those. What more whitespace? Count that.
"Number of Unit tests written?" Nothing motivates me to write redundant tests like counting the number of tests. I can easily copy and paste nearly identical code if it makes me look good according to this metric.
You get what you measure. No matter what metric you put in place, you will find that the specific thing measured will subvert most other quality concerns. Whatever you measure, be absolutely sure you want people to maximize that measurement while reducing others.
Edit
I'm not saying "Don't Measure". I'm saying "you get what you measure". Pick a metric that you want maximized at the expense of others. It's not hard to pick a metric. Just know the consequence of telling management what to measure.
I would argue that unit tests are a quality tool and not a productivity tool. If you want to both encourage unit testing and to give management a productivity metric, make unit testing mandatory to get code into production, and report on productivity based on code/features that makes it into production over a given time frame (weekly, bi-weekly, whatever). If we take as a given that people will game any system, then design the game to meet your goals.
I think Joel had it spot-on when he said that this sort of measurement will be gamed by your developers. It will not achieve what it set out to and you will likely end up with quality suffering (from the perception of everyone using the system) whilst your measurements of quality all suggest things have never been better!
Edit: You say that management are demanding this. You are a small company; your management cannot afford for everyone to up sticks and leave. Tell them that this is rubbish and you'll play no part in it.
If the whole idea is so that they can rank people to make them redundant (it sounds like it might be at this point), just ask them how many people have to go, and then choose the developers you believe to be the worst, using your intelligence and judgement rather than some dumb rule of thumb.
For some reason the defect black market comes to mind... although this is somewhat in reverse.
Any system based on metrics simply isn't going to work when it comes to developers, because it isn't something you can measure using conventional methods. Whatever you try to put in place will be gamed (because solving problems is what we do all day, and this is just another problem to be solved), and it will be detrimental to your code. For example, I wrote a simple spelling corrector the other day with about 5 unit tests, which were sufficient to check that it worked; but if I were measured on unit tests I could have spent another day writing another 100, all of which would pass but would add no value.
You need to work out why management want this system in place. If it's to give rewards then you should have a look at Joel Spolsky's article about incentive pay which is not far off the mark from what I've seen (think about bonus day and see how many people are really happy -- none as they just got what they thought they deserved -- and how many people are really pissed off -- anyone who got less than they thought they deserved).
To quote Steve Yegge:
shouldn't there be a rule that companies aren't allowed to do things that have been formally ridiculed in a Dilbert comic?
There was a study I just read in the newspaper here at home in Norway. In a nutshell, it said that office-type jobs generally get no benefit from performance pay, the reason being that measuring performance in most office jobs is almost impossible.
However, simpler jobs like strawberry picking do benefit from performance pay, because it is really easy to measure performance. Nobody is going to feel bad that a high performer gets higher pay, because everybody can clearly see that he or she has picked more berries.
In an office it is not always clear that the other person did a better job, and so a lot of people will be demotivated. They tested performance pay for teachers and found that it gave negative results. People who got higher pay often didn't see why they did better than others, and the ones who got lower pay usually couldn't see why they got less.
What they did find, though, was that non-monetary rewards usually helped: getting encouraging words from the boss for a job well done, etc.
Read iCon on how Steve Jobs managed to get people to perform. Basically he made people believe that they were part of something big and were going to change the world. That is what makes people put in an effort and perform. I don't think developers will put in a lot of effort for just money. It has to be something they really believe in and/or think is fun or enjoyable.
If you are going to tie people's pay to their unit test performance, the results are not going to be good.
People are going to try to game the system.
What I think you are after is:
You want people to deploy code that works and has a minimum number of bugs
You want the people that do that consistently to be rewarded
Your system will accomplish neither.
By tying people's pay to whether or not their tests fail, you are creating a disincentive to writing tests. Why would someone write code that, at best, yields no benefit, and at worst limits their salary? The overall incentive will be to keep the test bed as small as possible, so that the likelihood of failure is minimized.
This means that you will get more bugs, except they will be bugs you just don't know about.
It also means that you will be rewarding people that introduce bugs, rather than those that prevent them.
Basically you'll get the opposite of your objectives.
These are my initial thoughts on your four specific questions:
Tricky this one. At first glance it looks OK, but if the code passes unit test then, unless the developers are cheating (see below) or the test itself is wrong, it's difficult to see how you'd demonstrate this.
This seems like the best approach. All functions should have a unit test, and inspection of the code should reveal which are present and which are absent. However, one drawback could be that the developers write an empty test (i.e. one that just "passes" without actually testing anything - see the example after these points). You might have to invest in lengthy code reviews to spot this one.
How are you going to assess quality? Who is going to assess quality? This assumes that your QA team has access to highly skilled independent developers - which may be true, but seems unlikely.
Counting the number of anything (lines of code, unit tests written) is a non-starter. Developers will simply write large numbers of useless tests.
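As an illustration of the "empty test" mentioned in the second point, here is the sort of thing a review has to catch (a made-up JUnit 4 example, not taken from the question):

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class InvoiceServiceTest {

        // Runs, passes, and verifies nothing: it exists only so that
        // "every function has a unit test" appears to be satisfied.
        @Test
        public void testCalculateTotal() {
            assertTrue(true);
        }
    }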
I agree with oxbow_lakes, and in fact with the other answers that have appeared since I started writing this - most forms of measurement will be gamed or, worse, resented by developers.
I believe time is the only, albeit subjective, way to measure a developer's performance.
Given enough time in any one company, good developers will stand out. Project leaders will know who their best assets are. Bad developers will be exposed given enough time. Unfortunately, therein lies the ultimate problem: enough time.
Basic psychology - People work to incentives. If my chances of getting a bonus / keeping my job / whatever are based on the number of tests I write, I'll write tons of meaningless tests - probably at the expense of actually doing my real job, which is getting a product out the door.
Any other basic metric you can come up with will suffer the same problem and be equally meaningless.
If you insist on "rating" devs, you could use something a bit more lateral. Scores on one of the MS certification tests perhaps (which has the side effect of getting people trained up). At least that's objective and independently verified by a neutral third party so you can't "game" it. Of course that score also bears no resemblance to the person's effectiveness in your team but it's better than an arbitrary internal measurement.
You might also consider running code through some sort of complexity measurement tool (simpler==better) and scoring people on their results. Again, it has the effect of helping people to become better coders, which is what you really want to achieve.
Poor Ash...
Kudos for using managerial ignorance to push through something completely unrelated, but now you have to come up with a feasible measure.
I cannot come up with any performance measurement that is not ridiculous or easily gamed. Unit tests cannot change it. Since Kopecks and Black Market were linked within minutes, I'd rather give you ammunition for not requiring individual performance measurements:
First, Software is an optimization between conflicting goals. Evaluating one or a few of them - like how many tests come up during QA - will lead to severe tradeoffs in other areas that hurt the final product.
Second, teamwork means more than just the product of a few individuals glued together. The synergistic effects cannot be tracked back to the effort or skill of a single individual - and when developing software in a team, they have huge impact.
Third, the total cost of software unfolds only after time. Maintenance, scalability, compatibility with new platforms, interaction with future products all carry a significant long term cost. Measuring short term cost (year-over-year, or release to production) does not cover the long term cost at all, and once the long term cost is known it is pointless to track it back to the originator.
Why not have each developer "vote" on their colleagues: who helped us achieve our goals most in the last year? Why not trust you (as - apparently - their manager or lead) to judge their performance?
A scorecard combining a few factors around the unit tests should be fairly easy for someone outside the development group to measure:
1) How well do the unit tests cover the code, and any common input data that may be entered for UI elements? This may seem like a basic thing, but it is a good starting point and is something that can be quantified easily with tools like NCover, I think.
2) Are boundary conditions tested, e.g. nulls for parameters, or letters instead of numbers, and other basic validation tests (a sketch of such tests follows this answer)? This can also be quantified fairly easily by looking at the parameters of the various methods, as well as by having coding standards that prevent bypassing things here, e.g. all of an object's methods besides the constructor taking 0 parameters and thus having no boundary tests.
3) Granularity of a unit test. Does the test check for one specific case rather than trying to cover lots of different cases in one test? Do test classes contain thousands of lines of code?
4) Grade the code and tests in terms of readability and maintainability. Would someone new have to spend days figuring out what is going on, or is the code somewhat self-documenting? Examples would include method and class names being meaningful and documentation being present.
The last three things are what I suspect a manager, team lead or someone else outside the group of developers could rank and handle. There may be ways to game this, but the question is what end results you want to have. I'm thinking well-documented, high-quality, easily understood code = good code.
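A hedged sketch of the boundary tests point 2 refers to, written against a made-up parseAge helper (JUnit 4 assumed; the method and its rules are invented for illustration): nulls, non-numeric input, and the edge value all get their own small test.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AgeParserTest {

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNull() {
            AgeParser.parseAge(null);
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsLetters() {
            AgeParser.parseAge("abc");
        }

        @Test
        public void acceptsZeroAsLowerBound() {
            assertEquals(0, AgeParser.parseAge("0"));
        }
    }

    class AgeParser {
        static int parseAge(String raw) {
            if (raw == null || !raw.matches("\\d+")) {
                throw new IllegalArgumentException("age must be a non-negative number");
            }
            return Integer.parseInt(raw);
        }
    }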
Look up Deming and Total Quality Management for his thoughts on why performance appraisals should not be done at all for any job.
How about this instead: assume all employees are acceptable employees unless proven otherwise.
If someone does something unacceptable or does not perform to level you need, write them up as a performance problem. Determine how many writeups they get before you boot them out of the company.
If someone does something well, write them up for doing something good. If you want to offer a bonus, give it at the time the good performance happens. Even better, make sure you announce it when people get an attaboy; people will work towards getting them. Sure, you will have the political types who will try to game the system and get written up on the basis of others' achievements, but you get that anyway in any system. By announcing who got them at the time of the good performance, you have removed the secrecy that lets the office-politics players function best. If everyone knows Joe did something great and you reward Mary instead, people will start to speak up about it. At the very least Joe and Mary might both get an attaboy.
Each year, give everyone the same percentage pay raise as you have only retained the workers who have acceptable performance and you have rewarded the outstanding employees through the year anytime they did something good.
If you are stuck with measuring, then measure how many times you wrote someone up for poor performance and how many times you wrote someone up for good performance. Then you have to be careful to be reasonably objective about it and write up even the people who aren't your friends when they do well, and the people who are your friends when they do badly. But face it: the manager is going to be subjective in the process no matter how much you insist on objective criteria, because there are no objective criteria in the real world.
Definitely - and following the accepted answer - unit tests are not a good way to measure development performance. They can in fact be an investment with little to no return.
… Automated tests, per se, do not increase our code quality but do need code output
– From Measuring a Developer's impact
Reporting on productivity based on code/features that make it into production over a given time frame, and making unit tests mandatory, is actually a good system. The problem is that you get little feedback from it, and there may be too many excuses for missing a goal. Also, features / refactors / enhancements can be of very different sizes and natures, so in most occasions it wouldn't be fair to compare them as equally relevant to the organisation.
Using a version control system such as Git, we can atomize the minimum unit of valuable work into commits / PRs. Visualization (as in the quote linked above) is a better and more noble objective for management than a flat ladder or metric to rank their developers against.
Don't try to measure raw output. Try to understand developer work - go visualize it.

Disadvantages of Test Driven Development? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
What do I lose by adopting test driven design?
List only negatives; do not list benefits written in a negative form.
If you want to do "real" TDD (read: test first with the red, green, refactor steps) then you also have to start using mocks/stubs, when you want to test integration points.
When you start using mocks, after a while you will want to start using Dependency Injection (DI) and an Inversion of Control (IoC) container. To do that you need to use interfaces for everything (which have a lot of pitfalls themselves).
At the end of the day, you have to write a lot more code than if you just did it the "plain old way". Instead of just a customer class, you also need to write an interface, a mock class, some IoC configuration and a few tests.
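As a rough sketch of the extra artifacts being described, with invented names throughout (JUnit 4 assumed, IoC container wiring omitted): instead of one repository class you end up with an interface, a hand-rolled stub for tests, and the test itself.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    interface CustomerRepository {                       // the extra interface
        String findName(int customerId);
    }

    class DatabaseCustomerRepository implements CustomerRepository {
        @Override
        public String findName(int customerId) {
            // real JDBC/ORM code would live here
            throw new UnsupportedOperationException("not wired up in this sketch");
        }
    }

    class StubCustomerRepository implements CustomerRepository {   // the extra test double
        @Override
        public String findName(int customerId) {
            return "Ada";
        }
    }

    class GreetingBuilder {                              // the class under test
        private final CustomerRepository customers;
        GreetingBuilder(CustomerRepository customers) { this.customers = customers; }
        String greetingFor(int customerId) {
            return "Dear " + customers.findName(customerId);
        }
    }

    public class GreetingBuilderTest {                   // and finally the test itself
        @Test
        public void greetsCustomerByName() {
            assertEquals("Dear Ada", new GreetingBuilder(new StubCustomerRepository()).greetingFor(42));
        }
    }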
And remember that the test code should also be maintained and cared for. Tests should be as readable as everything else and it takes time to write good code.
Many developers don't quite understand how to do all these "the right way". But because everybody tells them that TDD is the only true way to develop software, they just try the best they can.
It is much harder than one might think. Often projects done with TDD end up with a lot of code that nobody really understands. The unit tests often test the wrong thing, in the wrong way. And nobody agrees what a good test should look like, not even the so-called gurus.
All those tests make it a lot harder to change (as opposed to refactor) the behavior of your system, and simple changes just become too hard and time consuming.
If you read the TDD literature, there are always some very good examples, but often in real life applications, you must have a user interface and a database. This is where TDD gets really hard, and most sources don't offer good answers. And if they do, it always involves more abstractions: mock objects, programming to an interface, MVC/MVP patterns etc., which again require a lot of knowledge, and... you have to write even more code.
So be careful... if you don't have an enthusiastic team and at least one experienced developer who knows how to write good tests and also knows a few things about good architecture, you really have to think twice before going down the TDD road.
Several downsides (and I'm not claiming there are no benefits - especially when writing the foundation of a project - it'd save a lot of time at the end):
Big time investment. For the simple case you lose about 20% of the actual implementation, but for complicated cases you lose much more.
Additional complexity. For complex cases your test cases are harder to work out; in cases like that I'd suggest using automatic reference code that runs in parallel in the debug version / test run, instead of unit tests of the simplest cases.
Design impacts. Sometimes the design is not clear at the start and evolves as you go along - this will force you to redo your tests, which will generate a big loss of time. I would suggest postponing unit tests in this case until you have some grasp of the design in mind.
Continuous tweaking. For data structures and black-box algorithms unit tests would be perfect, but for algorithms that tend to be changed, tweaked or fine-tuned, this can cause a big time investment that one might claim is not justified. So use it when you think it actually fits the system, and don't force the design to fit TDD.
When you get to the point where you have a large number of tests, changing the system might require re-writing some or all of your tests, depending on which ones got invalidated by the changes. This could turn a relatively quick modification into a very time-consuming one.
Also, you might start making design decisions based more on TDD than on actually good design principles. Whereas you may have had a very simple, easy solution that is impossible to test the way TDD demands, you now have a much more complex system that is actually more prone to mistakes.
I think the biggest problem for me is the HUGE loss of time it takes "getting in to it". I am still very much at the beginning of my journey with TDD (see my blog for updates on my testing adventures, if you are interested) and I have literally spent hours getting started.
It takes a long time to get your brain into "testing mode" and writing "testable code" is a skill in itself.
TBH, I respectfully disagree with Jason Cohen's comments on making private methods public; that's not what it is about. I have made no more public methods in my new way of working than before. It does, however, involve architectural changes and allows you to "hot plug" modules of code to make everything else easier to test. You should not be making the internals of your code more accessible to do this. Otherwise we are back to square one with everything being public - where is the encapsulation in that?
So, (IMO) in a nutshell:
The amount of time taken to think (i.e. actually grok'ing testing).
The new knowledge required of knowing how to write testable code.
Understanding the architectural changes required to make code testable.
Increasing your skill of "TDD-Coder" while trying to improve all the other skills required for our glorious programming craft :)
Organising your code base to include test code without screwing up your production code.
PS: If you would like links to positives, I have asked and answered several questions on it, check out my profile.
In the few years that I've been practicing Test Driven Development, I'd have to say the biggest downsides are:
Selling it to management
TDD is best done in pairs. For one, it's tough to resist the urge to just write the implementation when you KNOW how to write an if/else statement. But a pair will keep you on task because you keep him on task. Sadly, many companies/managers don't think that this is a good use of resources. Why pay for two people to write one feature, when I have two features that need to be done at the same time?
Selling it to other developers
Some people just don't have the patience for writing unit tests. Some are very proud of their work. Or, some just like seeing convoluted methods/functions bleed off the end of the screen. TDD isn't for everyone, but I really wish it were. It would make maintaining stuff so much easier for those poor souls who inherit code.
Maintaining the test code along with your production code
Ideally, your tests will only break when you make a bad code decision. That is, you thought the system worked one way, and it turns out it didn't. By breaking a test, or a (small) set of tests, this is actually good news: you know exactly how your new code will affect the system. However, if your tests are poorly written, tightly coupled or, worse yet, generated (cough VS Test), then maintaining your tests can quickly become a chore. And after enough tests start to cause more work than the perceived value they create, the tests will be the first thing to be deleted when schedules become compressed (e.g. it gets to crunch time).
Writing tests so that you cover everything (100% code coverage)
Ideally, again, if you adhere to the methodology, your code will be 100% tested by default. Typically, though, I end up with code coverage upwards of 90%. This usually happens when I have some template-style architecture, the base is tested, and I try to cut corners and not test the template customizations. Also, I have found that when I encounter a new barrier I hadn't previously encountered, I have a learning curve in testing it. I will admit to writing some lines of code the old skool way, but I really like to have that 100%. (I guess I was an over-achiever in school, er, skool.)
However, with that I'd say that the benefits of TDD far outweigh the negatives for the simple idea that if you can achieve a good set of tests that cover your application but aren't so fragile that one change breaks them all, you will be able to keep adding new features on day 300 of your project as you did on day 1. This doesn't happen with all those who try TDD thinking it's a magic bullet to all their bug-ridden code, and so they think it can't work, period.
Personally I have found that with TDD, I write simpler code, I spend less time debating if a particular code solution will work or not, and that I have no fear to change any line of code that doesn't meet the criteria set forth by the team.
TDD is a tough discipline to master, and I've been at it for a few years, and I still learn new testing techniques all the time. It is a huge time investment up front, but, over the long term, your sustainability will be much greater than if you had no automated unit tests. Now, if only my bosses could figure this out.
On your first TDD project there are two big losses, time and personal freedom
You lose time because:
Creating a comprehensive, refactored, maintainable suite of unit and acceptance tests adds major time to the first iteration of the project. This may be time saved in the long run but equally it can be time you don't have to spare.
You need to choose and become expert in a core set of tools. A unit testing tool needs to be supplemented by some kind of mocking framework and both need to become part of your automated build system. You also want to pick and generate appropriate metrics.
You lose personal freedom because:
TDD is a very disciplined way of writing code that tends to rub raw against those at the top and bottom of the skills scale. Always writing production code in a certain way and subjecting your work to continual peer review may freak out your worst and best developers and even lead to loss of headcount.
Most Agile methods that embed TDD require that you talk to the client continually about what you propose to accomplish (in this story/day/whatever) and what the trade-offs are. Once again, this isn't everyone's cup of tea, on either the developers' side of the fence or the clients'.
Hope this helps
TDD requires you to plan out how your classes will operate before you write code to pass those tests. This is both a plus and a minus.
I find it hard to write tests in a "vacuum" - before any code has been written. In my experience I tend to trip over my tests whenever I inevitably think of something while writing my classes that I forgot while writing my initial tests. Then it's time to refactor not only my classes, but ALSO my tests. Repeat this three or four times and it can get frustrating.
I prefer to write a draft of my classes first then write (and maintain) a battery of unit tests. After I have a draft, TDD works fine for me. For example, if a bug is reported, I will write a test to exploit that bug and then fix the code so the test passes.
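A small made-up example of that bug-first workflow (JUnit 4 assumed; the slugify function and the reported bug are invented for illustration): the test reproduces the report, fails against the buggy code, and passes once the fix is in.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class SlugifyRegressionTest {

        // Written first: suppose the reported bug is that slugify("  Hello  World ")
        // produced "hello--world" instead of "hello-world".
        @Test
        public void collapsesRunsOfWhitespaceIntoASingleDash() {
            assertEquals("hello-world", Slugs.slugify("  Hello  World "));
        }
    }

    class Slugs {
        static String slugify(String input) {
            // The fix: trim and collapse whitespace runs before replacing with '-'.
            return input.trim().toLowerCase().replaceAll("\\s+", "-");
        }
    }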
Prototyping can be very difficult with TDD - when you're not sure what road you're going to take to a solution, writing the tests up-front can be difficult (other than very broad ones). This can be a pain.
Honestly I don't think that for "core development" for the vast majority of projects there's any real downside, though; it's talked down a lot more than it should be, usually by people who believe their code is good enough that they don't need tests (it never is) and people who just plain can't be bothered to write them.
Well, and this is a stretch, you need to debug your tests. Also, there is a certain cost in time for writing the tests, though most people agree that it's an up-front investment that pays off over the lifetime of the application, in both time saved debugging and in stability.
The biggest problem I've personally had with it, though, is getting up the discipline to actually write the tests. In a team, especially an established team, it can be hard to convince them that the time spent is worthwhile.
The downside to TDD is that it is usually tightly associated with 'Agile' methodology, which places no importance on documentation of a system, rather the understanding behind why a test 'should' return one specific value rather than any other resides only in the developer's head.
As soon as the developer leaves or forgets the reason that the test returns one specific value and not some other, you're screwed. TDD is fine IF it is adequately documented and surrounded by human-readable (i.e. pointy-haired-manager-readable) documentation that can be referred to in 5 years, when the world changes and your app needs to change as well.
When I speak of documentation, this isn't a blurb in code, this is official writing that exists external to the application, such as use cases and background information that can be referred to by managers, lawyers and the poor sap who has to update your code in 2011.
I've encountered several situations where TDD makes me crazy. To name some:
Test case maintainability:
If you're in a big enterprise, chances are that you don't have to write the test cases yourself, or at least most of them were written by someone else before you joined the company. An application's features change from time to time, and if you don't have a system in place to track them, such as HP Quality Center, you'll go crazy in no time.
This also means that it'll take new team members a fair amount of time to grasp what's going on with the test cases. In turn, this translates into more money needed.
Test automation complexity:
If you automate some or all of the test cases into machine-runnable test scripts, you will have to make sure these test scripts are in sync with their corresponding manual test cases and in line with the application changes.
Also, you'll spend time debugging the code that helps you catch bugs. In my opinion, most of these bugs come from the testing team's failure to reflect application changes in the automated test scripts. Changes in business logic, the GUI and other internal stuff can make your scripts stop running or run unreliably. Sometimes the changes are very subtle and difficult to detect. Once, all of my scripts reported failure because they based their calculations on information from table 1, while table 1 was now table 2 (because someone had swapped the names of the table objects in the application code).
If your tests are not very thorough, you might fall into a false sense of "everything works" just because your tests pass. Theoretically, if your tests pass, the code is working; but if we could write code perfectly the first time, we wouldn't need tests. The moral here is to make sure to do a sanity check of your own before calling something complete; don't just rely on the tests.
On that note, if your sanity check finds something that is not tested, make sure to go back and write a test for it.
The biggest problem is the people who don't know how to write proper unit tests. They write tests that depend on each other (which work great when run with Ant, but then all of a sudden fail when run from Eclipse, just because they run in a different order). They write tests that don't test anything in particular - they just debug the code, check the result, and turn it into a test, calling it "test1". They widen the scope of classes and methods just because it makes them easier to write unit tests for. The code of the unit tests is terrible, with all the classical programming problems (heavy coupling, methods that are 500 lines long, hard-coded values, code duplication), and it is hell to maintain. For some strange reason people treat unit tests as something inferior to the "real" code, and they don't care about their quality at all. :-(
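A made-up illustration of that order-dependence problem and the usual cure: don't let tests share mutable state; rebuild it in a @Before fixture so each test passes on its own, in any order (JUnit 4 assumed, Cart invented for the example).

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CartTest {

        private Cart cart;

        @Before
        public void freshCartForEveryTest() {
            cart = new Cart();          // no static/shared cart left over from another test
        }

        @Test
        public void startsEmpty() {
            assertEquals(0, cart.itemCount());
        }

        @Test
        public void addingAnItemIncrementsTheCount() {
            cart.add("book");
            assertEquals(1, cart.itemCount());
        }
    }

    class Cart {
        private final java.util.List<String> items = new java.util.ArrayList<>();
        void add(String item) { items.add(item); }
        int itemCount() { return items.size(); }
    }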
You lose the ability to say you are "done" before testing all your code.
You lose the capability to write hundreds or thousands of lines of code before running it.
You lose the opportunity to learn through debugging.
You lose the flexibility to ship code that you aren't sure of.
You lose the freedom to tightly couple your modules.
You lose the option to skip writing low-level design documentation.
You lose the stability that comes with code that everyone is afraid to change.
You lose a lot of time spent writing tests. Of course, this might be saved by the end of the project by catching bugs faster.
Refocusing on difficult, unforeseen requirements is the constant bane of the programmer. Test-driven development forces you to focus on the already-known, mundane requirements, and limits your development to what has already been imagined.
Think about it, you are likely to end up designing to specific test cases, so you won't get creative and start thinking "it would be cool if the user could do X, Y, and Z". Therefore, when that user starts getting all excited about potential cool requirements X, Y, and Z, your design may be too rigidly focused on already specified test cases, and it will be difficult to adjust.
This, of course, is a double edged sword. If you spend all your time designing for every conceivable, imaginable, X, Y, and Z that a user could ever want, you will inevitably never complete anything. If you do complete something, it will be impossible for anyone (including yourself) to have any idea what you're doing in your code/design.
You will lose large classes with multiple responsibilities.
You will also likely lose large methods with multiple responsibilities.
You may lose some ability to refactor, but you will also lose some of the need to refactor.
Jason Cohen said something like:
TDD requires a certain organization for your code. This might be architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable.
I say this indicates a missed abstraction -- if the private code really needs to be tested, it should probably be in a separate class.
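A hedged sketch of that "separate class" idea, with invented names: rather than making a private formatting helper public just so a test can reach it, the rule gets its own small class with a public, independently testable API.

    // Before: ReportGenerator hides a private formatCurrency() that tests can't reach.
    // After: the formatting rule lives in its own class and can be tested directly.
    class CurrencyFormatter {
        String format(long cents) {
            return String.format("$%d.%02d", cents / 100, cents % 100);
        }
    }

    class ReportGenerator {
        private final CurrencyFormatter currency = new CurrencyFormatter();

        String lineItem(String label, long cents) {
            return label + ": " + currency.format(cents);   // delegates instead of hiding the logic
        }
    }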
Dave Mann
The biggest downside is that if you really want to do TDD properly you will have to fail a lot before you succeed. Given how many software companies work (dollar per KLOC) you will eventually get fired. Even if your code is faster, cleaner, easier to maintain, and has less bugs.
If you are working in a company that pays you by the KLOCs (or requirements implemented -- even if not tested) stay away from TDD (or code reviews, or pair programming, or Continuous Integration, etc. etc. etc.).
I second the answer about initial development time. You also lose the ability to comfortably work without the safety of tests. I've also been described as a TDD nutbar, so you could lose a few friends ;)
It's perceived as slower. Long term that's not true in terms of the grief it will save you down the road, but you'll end up writing more code, so arguably you're spending time on "testing, not coding". It's a flawed argument, but you did ask!
It can be hard and time consuming to write tests for "random" data like XML feeds and databases (not that hard). I've spent some time lately working with weather data feeds. It's quite confusing writing tests for that, at least as I don't have too much experience with TDD.
You have to write applications in a different way: one which makes them testable. You'd be surprised how difficult this is at first.
Some people find the concept of thinking about what they're going to write before they write it too hard. Concepts such as mocking can be difficult for some too. TDD in legacy apps can be very difficult if they weren't designed for testing. TDD around frameworks that are not TDD friendly can also be a struggle.
TDD is a skill so junior devs may struggle at first (mainly because they haven't been taught to work this way).
Overall, though, the cons get resolved as people become skilled, and you end up abstracting away the "smelly" code and having a more stable system.
Unit tests are more code to write, thus a higher upfront cost of development
It is more code to maintain
Additional learning required
Good answers all. I would add a few ways to avoid the dark side of TDD:
I've written apps to do their own randomized self-test (a sketch of the idea follows these points). The problem with writing specific tests is that even if you write lots of them, they only cover the cases you think of. Random-test generators find problems you didn't think of.
The whole concept of lots of unit tests implies that you have components that can get into invalid states, like complex data structures. If you stay away from complex data structures there's a lot less to test.
To the extent your application allows it, be shy of design that relies on the proper ordering of notifications, events and side-effects. Those can easily get dropped or scrambled so they need a lot of testing.
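A sketch of the randomized self-test idea from the first point above (ClampRange and its properties are made up for the example): generate random inputs and check invariants that must always hold, rather than hand-picking cases.

    import java.util.Random;

    class ClampRange {
        static int clamp(int value, int lo, int hi) {
            return Math.max(lo, Math.min(hi, value));
        }
    }

    public class RandomizedClampCheck {
        public static void main(String[] args) {
            Random rng = new Random();
            for (int run = 0; run < 10_000; run++) {
                int lo = rng.nextInt(200) - 100;
                int hi = lo + rng.nextInt(100);          // ensure lo <= hi
                int value = rng.nextInt(400) - 200;

                int result = ClampRange.clamp(value, lo, hi);

                // Properties that must hold for any random input:
                if (result < lo || result > hi) {
                    throw new AssertionError("out of range on run " + run);
                }
                if (value >= lo && value <= hi && result != value) {
                    throw new AssertionError("changed an in-range value on run " + run);
                }
            }
            System.out.println("10000 randomized runs passed");
        }
    }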
Let me add that if you apply BDD principles to a TDD project, you can alleviate a few of the major drawbacks listed here (confusion, misunderstandings, etc.). If you're not familiar with BDD, you should read Dan North's introduction. He came up with the concept in answer to some of the issues that arose from applying TDD in the workplace. Dan's intro to BDD can be found here.
I only make this suggestion because BDD addresses some of these negatives and acts as a stop-gap. You'll want to consider this when collecting your feedback.
It takes some time to get into it and some time to start doing it in a project but... I always regret not doing a Test Driven approach when I find silly bugs that an automated test could have found very fast. In addition, TDD improves code quality.
You have to make sure your tests are always up to date, the moment you start ignoring red lights is the moment the tests become meaningless.
You also have to make sure the tests are comprehensive, or the moment a big bug appears, the stuffy management type you finally convinced to let you spend time writing more code will complain.
The person who taught my team agile development didn't believe in planning; you only wrote as much as the tiniest requirement demanded.
His motto was refactor, refactor, refactor. I came to understand that refactor meant 'not planning ahead'.
Development time increases: every method needs testing, and if you have a large application with dependencies you need to prepare and clean up your data for the tests.
TDD requires a certain organization for your code. This might be inefficient or difficult to read. Or even architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable, which is just wrong.
When code changes, you have to change the tests as well. With refactoring this can be a lot of extra work.

How to make junior programmers write tests? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
We have a junior programmer that simply doesn't write enough tests.
I have to nag him every two hours, "have you written tests?"
We've tried:
Showing that the design becomes simpler
Showing it prevents defects
Making it an ego thing saying only bad programmers don't
This weekend 2 team members had to come to work because his code had a NULL reference and he didn't test it
My work requires top quality stable code, and usually everyone 'gets it' and there's no need to push tests through. We know we can make him write tests, but we all know the useful tests are those written when you're into it.
Do you know of more motivations?
This is one of the hardest things to do. Getting your people to get it.
Sometimes one of the best ways to help junior level programmers 'get it' and learn the right techniques from the seniors is to do a bit of pair programming.
Try this: on an upcoming project, pair the junior guy up with yourself or another senior programmer. They should work together, taking turns "driving" (being the one typing at the keyboard) and "coaching" (looking over the shoulder of the driver and pointing out suggestions, mistakes, etc. as they go). It may seem like a waste of resources, but you will find:
That these guys together can produce code plenty fast and of higher quality.
That if your junior guy learns enough to "get it", with a senior guy directing him along the right path (e.g. "OK, now before we continue, let's write a test for this function"), it will be well worth the resources you commit to it.
Maybe also have someone in your group give the Unit Testing 101 presentation by Kate Rhodes; I think it's a great way to get people excited about testing, if delivered well.
Another thing you can do is have your Jr. Devs practice the Bowling Game Kata which will help them learn Test Driven Development. It is in java, but could easily be adapted to any language.
Have a code review before every commit (even if it's a 1 minute "I've changed this variable name"), and as part of the code review, review any unit tests.
Don't sign off on the commit until the tests are in place.
(Also - If his work wasn't tested - why was it in a production build in the first place? If it's not tested, don't let it in, then you won't have to work weekends)
For myself, I have started insisting that every bug I find and fix be expressed as a test:
"Hmmm, that's not right..."
Find possible problem
Write a test, show that the code fails
Fix the problem
Show that the new code passes
Loop if the original problem persists
I try to do this even while banging stuff out, and I get done in about the same time, only with a partial test suite already in place.
(I don't live in a commercial programming environment, and am often the only coder working a particular project.)
Imagine I am a mock programmer, named... Marco. Imagine I graduated school not that long ago and never really had to write tests. Imagine I work in a company that doesn't really enforce or ask for this. OK? Good! Now imagine that the company is switching to using tests, and they are trying to get me in line with this. I will give a somewhat snarky reaction to the items mentioned so far, as if I hadn't done any research on this.
Let's get this started with the creator:
Showing that the design becomes simpler.
How can writing more make things simpler? I would now have to keep tabs on more cases, etc. This makes it more complicated, if you ask me. Give me solid details.
Showing it prevents defects.
I know that. This is why they are called tests. My code is good, and I checked it for issues, so I don't see where those tests would help.
Making it an ego thing saying only bad programmers don't.
Ohh, so you think I am a bad programmer just because I don't test as much. I'm insulted and positively annoyed at you. I would rather have assistance and support than sayings.
#Justin Standard: At the start of a new project, pair the junior guy up with yourself or another senior programmer.
Ohh, this is so important that resources will be spent making sure I see how things are done, and that someone assists me with how things are done. This is helpful, and I might just start doing it more.
#Justin Standard: Read Unit Testing 101 presentation by Kate Rhodes.
Ahh, that was an interesting presentation, and it made me think about testing. It hammered some points in that I should consider, and it might have swayed my views a bit.
I would love to see more compelling articles, and other tools to assist me in getting in line with thinking this is the right way to do things.
#Dominic Cooney: Spend some time and share testing techniques.
Ahh, this helps me understand what is expected of me as far as techniques, and it puts more items in my bag of knowledge, that I might use again.
#Dominic Cooney: Answer questions, examples and books.
Having a point person (people) to answer question is helpful, it might make me more likely to try. Good examples are great, and it gives me something to aim for, and something to look for reference. Books that are relevant to this directly are great reference.
#Adam Hayle: Surprise Review.
Say what? You sprung something on me that I am completely unprepared for. I feel uncomfortable with this, but will do my best. I will now be scared and mildly apprehensive of this coming up again, thank you. The scare tactic might have worked, but it does have a cost. However, if nothing else works, this might just be the push that is needed.
#Rytmis: Items are only considered done when they have test cases.
Ohh, interesting. I see I really do have to do this now, otherwise I'm not completing anything. This makes sense.
#jmorris: Get Rid / Sacrifice.
glares, glares, glares - There is a chance I might learn, and with support, and assistance, I can become a very important and functional part of the teams. This is one of my handicaps now, but it won't be for long. However, if I just don't get it, I understand that I will go. I think I will get it.
In the end, the support of my team will play a large part in all this. Having a person take their time to assist me and get me started on good habits is always welcome. Then, afterwards, having a good support net would be great. It would always be appreciated to have someone come by a few times later on and go over some code, to see how everything is flowing - not in a review per se, but more as a friendly visit.
Reasoning, Preparing, Teaching, Follow up, Support.
I've noticed that a lot of programmers see the value of testing on a rational level. If you've heard things like "yeah, I know I should test this but I really need to get this done quickly" then you know what I mean. However, on an emotional level they feel that they get something done only when they're writing the real code.
The goal, then, should be to somehow get them to understand that testing is in fact the only way to measure when something is "done", and thus give them the intrinsic motivation to write tests.
I'm afraid that's a lot harder than it should be, though. You'll hear a lot of excuses along the lines of "I'm in a real hurry, I'll rewrite/refactor this later and then add the tests" -- and of course, the followup never happens because, surprisingly, they're just as busy the next week.
Here's what I would do:
First time out... "we're going to do this project jointly. I'm going to write the tests and you're going to write the code. Pay attention to how I write the tests, coz that's how we do things around here and that's what I'll expect of you."
Following that... "You're done? Great! First let's look at the tests that are driving your development. Oh, no tests? Let me know when that is done and we'll reschedule looking at your code. If you're needing help to formulate the tests let me know and I'll help you."
He's already doing this. Really. He just doesn't write it down. Not convinced? Watch him go through the standard development cycle:
Write a piece of code
Compile it
Run to see what it does
Write the next piece of code
Step #3 is the test. He already does testing, he just does it manually. Ask him this question: "How do you know tomorrow that the code from today still works?" He will answer: "It's such a little amount of code!"
Ask: "How about next week?"
When he hasn't got an answer, ask: "How would you like your program to tell you when a change breaks something that worked yesterday?"
That's what automatic unit testing is all about.
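To make that concrete, here is the manual step 3 written down as code, with invented names (JUnit 4 assumed): the check he eyeballed today becomes a test that reruns itself on every build from now on.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShippingCostTest {

        // Yesterday he ran the program and eyeballed that a 2 kg parcel costs 7.50.
        // Written down as a test, that check now reruns itself on every build.
        @Test
        public void twoKiloParcelCostsSevenFifty() {
            assertEquals(7.50, Shipping.costFor(2.0), 0.001);
        }
    }

    class Shipping {
        static double costFor(double kilograms) {
            return 2.50 + 2.50 * kilograms;   // flat fee plus per-kilo rate (made up)
        }
    }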
As a junior programmer, I'm still trying to get into the habit of writing tests. Obviously it's not easy to pick up new habits, but thinking about what would make this work for me, I have to +1 the comments about code reviews and coaching/pair programming.
It may also be worth emphasising the long-term purpose of testing: ensuring that what worked yesterday is still working today, and next week, and next month. I only say that because in skimming the answers I didn't see that mentioned.
In doing code reviews (if you decide to do that), make sure your young dev knows it's not about putting him down, it's about making the code better. Because that way his confidence is less likely to get damaged. And that's important. On the other hand, so is knowing how little you know.
Of course, I don't really know anything. But I hope the words have been useful.
Edit: [Justin Standard]
Don't put yourself down, what you have to say is pretty much right on.
On your point about code reviews: what you will find is that not only will the junior dev learn in the process, but so will the reviewers. Everyone in a code review learns if you make it a collaborative process.
Frankly, if you are having to put that much effort into getting him to do something then you may have to come to terms with the thought that he may just not be a good fit for the team, and may need to go. Now, this doesn't necessarily mean firing him... it may mean finding someplace else in the company his skills are more suited. But if there is no place else...you know what to do.
I'm assuming he is also a fairly new hire (< 1 year) and probably recently out of school...in which case he may not be accustomed to how things work in a corporate setting. Things like that most of us could get away with in college.
If this is the case, one thing I've found works is to have a sort of "surprise new-hire review." It doesn't matter if you've never done it before... he won't know that. Just sit him down, tell him you are going to go over his performance and show him some real numbers... take your normal review sheet (you do have a formal review process, right?), change the heading if you want so it looks official, and show him where he stands. If you show him in a very formal setting that not doing tests is adversely affecting his performance rating, as opposed to just "nagging" him about it, he will hopefully get the point. You've got to show him that his actions will actually affect him, be it pay-wise or otherwise.
I know, you may want to stay away from doing this because it's not official... but I think you are within reason to do it and it's probably going to be a whole lot cheaper than having to fire him and recruit someone new.
As a junior programmer myself, I thought that I'd reveal what it was like when I found myself in a similar situation to your junior developer.
When I first came out of uni, I found that it had severely under-equipped me to deal with the real world. Yes, I knew some Java basics and some philosophy (don't ask), but that was about it. When I first got my job it was a little daunting, to say the least. Let me tell you, I was probably one of the biggest cowboys around; I would hack together a little bug fix / algorithm with no comments / testing / documentation and ship it out the door.
I was lucky enough to be under the supervision of a kind and very patient senior programmer. Luckily for me, he decided to sit down with me and spend 1-2 weeks going through my very hacked-together code. He would explain where I'd gone wrong, and the finer points of C and pointers (boy did that confuse me!). We managed to write a pretty decent class/module in about a week. All I can say is that if the senior dev hadn't invested the time to help me along the right path, I probably wouldn't have lasted very long.
Happily, 2 years down the line, I would hope that some of my colleagues might even consider me an average programmer.
Take home points
Most Universities are very bad at preparing students for the real world
Paired programming really helped me. That's not to say that it will help everyone, but it worked for me.
Assign them to projects that don't require "top quality stable code" if that's your concern and let the jr. developer fail. Have them be the one to 'come in on the weekend' to fix their bugs. Have lunch a lot and talk about software development practices (not lectures, but discussions). In time they will acquire and develop the best practices to do the tasks they are assigned.
Who knows, they might even come up with something better than the techniques your team currently uses.
If the junior programmer, or anyone, doesn't see the value in testing, then it will be hard to get them to do it...period.
I would have made the junior programmer sacrifice his weekend to fix the bug. His actions (or lack thereof) are not affecting him directly. Also, make it apparent that he will not see advancement and/or pay increases if he doesn't improve his skills in testing.
In the end, even with all your help, encouragement, mentoring, he might not be a fit for your team, so let him go and look for someone who does get it.
I second RodeoClown's comment about code reviewing every commit. Once he's done it a fair few times he'll get in the habit of testing stuff.
I don't know if you need to block commits like that though.
At my workplace everyone has free commit to everything, and all SVN commit messages (with diffs) are emailed to the team.
Note: you really want the Thunderbird colored-diffs add-on if you plan on doing this.
My boss or myself (the 2 'senior' coders) will end up reading over the commits, and if there's any stuff like "you forgot to add unit tests" we just flick an email or go and chat to the person, explaining why they needed unit tests or whatever. Everyone else is encouraged to read the commits too, as it's a great way of seeing what's going on, but the junior devs don't comment so much.
You can help encourage people to get into the habit of this by periodically saying things like "Hey, bob, did you see that commit I did this morning, I found this neat trick where you can do blah blah whatever, read the commit and see how it works!"
NB: We have 2 'senior' devs and 3 junior ones. This may not scale, or you might need to adjust the process a bit with more developers.
It's his mentor's responsibility to teach him/her. How well are you teaching him/her HOW to test? Are you pair programming with him? The junior more than likely doesn't know HOW to set up a good test for xyz.
As a junior fresh out of school, he knows many concepts. Some technique. Some experience. But in the end, all a junior is, is POTENTIAL. On almost every feature they work on, there will be something new that they have never done before. Sure, the junior may have done a simple State pattern for a project in class, opening and shutting "doors", but never a real-world application of the patterns.
He/she will only be as good as how well you teach. If they were able to "just get it", do you think they would have taken a junior position in the first place?
In my experience, juniors are hired and given almost the same responsibility as seniors, but are just paid less and then ignored when they start to falter. Forgive me if I seem bitter; it's 'cause I am.
Make code coverage part of the reviews.
Make "write a test that exposes the bug" a prerequisite to fixing a bug.
Require a certain level of coverage before code can be checked in.
Find a good book on test-driven development and use it to show how test-first can speed development.
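To make the second point in that list concrete, here is a minimal sketch of what "a test that exposes the bug" can look like in plain C++ (no particular test framework; parseRetryCount and the empty-string bug are invented purely for illustration). The point is that the test is written and seen to fail before the fix goes in, and then stays behind as a regression guard.

    #include <cassert>
    #include <string>

    // Hypothetical function under repair. The bug report: an empty string
    // blows up inside std::stoi. The guard below is the eventual one-line fix.
    int parseRetryCount(const std::string& s) {
        if (s.empty()) return 0;   // the fix; the test below failed without it
        return std::stoi(s);
    }

    // The regression test, written *before* the fix. Against the buggy
    // version it blows up (std::stoi throws); after the fix it passes,
    // and it keeps the bug from silently coming back.
    void testEmptyStringMeansZeroRetries() {
        assert(parseRetryCount("") == 0);
        assert(parseRetryCount("3") == 3);
    }

    int main() {
        testEmptyStringMeansZeroRetries();
        return 0;
    }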
Lots of psychology and helpful "mentoring" techniques here but, in all honesty, this just boils down to "write tests if you want to still have a job tomorrow."
You can couch it in whatever terms you think are appropriate, harsh or soft, it doesn't matter. But the fact is, programmers are not paid to just throw together code & check it in -- they're paid to carefully put together code, then put together tests, then test their code, THEN check the whole thing in. (At least that's what it sounds like, from your description.)
Hence, if someone is going to refuse to do their job, explain to them that they can stay home, tomorrow, and you'll hire someone who WILL do the job.
Again, you can do all this gently, if you think that's necessary, but a lot of people just need a big hard slap of Life In The Real World, and you'd be doing them a favor by giving it to them.
Good luck.
Change his job description for a while to be solely writing and maintaining tests. I've heard that many companies do this with fresh, inexperienced hires for a while when they start.
Additionally, issue a challenge while he's in that role: write tests that will a) fail on the current code and b) fulfill the requirements of the software. Hopefully it'll cause him to create some solid and thorough tests (improving the project) and make him better at writing tests for when he re-integrates into core development.
Edit: "fulfill the requirements of the software" meaning that he's not just writing tests to purposely break the code when the code never intended or needed to take that test case into account.
If your colleague lacks experience writing tests maybe he or she is having difficulty testing beyond simple situations, and that is manifesting itself as inadequate testing. Here's what I would try:
Spend some time and share testing techniques, like dependency injection, looking for edge cases, and so on, with your colleague (there's a small sketch of constructor injection after this list).
Offer to answer questions about testing.
Do code reviews of tests for a while. Ask your colleague to review changes of yours that are exemplary of good testing. Look at their comments to see if they're really reading and understanding your test code.
If there are books that fit particularly well with your team's testing philosophy give him or her a copy. It might help if your code review feedback or discussions reference the book so he or she has a thread to pick up and follow.
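Since dependency injection is the technique juniors have most often never seen, here is a small sketch of constructor injection in C++ (Clock, Greeter, and FakeClock are invented names, not anything from the question's codebase): the production code depends on an interface, and the test substitutes a fake so edge cases become trivial to reach.

    #include <cassert>
    #include <string>

    // The code under test depends on this interface instead of reading
    // the system clock directly, so a test can supply its own time.
    struct Clock {
        virtual ~Clock() = default;
        virtual int currentHour() const = 0;   // 0-23
    };

    class Greeter {
    public:
        explicit Greeter(const Clock& clock) : clock_(clock) {}
        std::string greeting() const {
            return clock_.currentHour() < 12 ? "Good morning" : "Good afternoon";
        }
    private:
        const Clock& clock_;
    };

    // A fake the test controls completely -- this is what makes the
    // noon boundary (an edge case) easy to exercise.
    struct FakeClock : Clock {
        int hour = 0;
        int currentHour() const override { return hour; }
    };

    int main() {
        FakeClock clock;
        clock.hour = 11;
        assert(Greeter(clock).greeting() == "Good morning");
        clock.hour = 12;
        assert(Greeter(clock).greeting() == "Good afternoon");
        return 0;
    }

The same shape works for databases, sockets, and the file system: anything slow or nondeterministic hides behind a small interface.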
I wouldn't especially emphasize the shame/guilt factor. It is worth pointing out that testing is a widely adopted, good practice and that writing and maintaining good tests is a professional courtesy so your team mates don't need to spend their weekends at work, but I wouldn't belabor those points.
If you really need to "get tough" then institute an impartial system; nobody likes to feel like they're being singled out. For example your team might require code to maintain a certain level of test coverage (able to be gamed, but at least able to be automated); require new code to have tests; require reviewers to consider the quality of tests when doing code reviews; and so on. Instituting that system should come from team consensus. If you moderate the discussion carefully you might uncover other underlying reasons your colleague's testing isn't what you expect.
@jsmorris
I once had the senior developer and "architect" berate me and a tester (it was my first job out of college) in email for not staying late and finishing such an "easy" task the night before. We had been at it all day and called it quits at 7pm; I had been thrashing since 11am, before lunch, and had pestered every member of our team for help at least twice.
I responded and cc'd the team with:
"I've been disappointed in you for a month now. I never get help from the team. I'll be at the coffee shop across the street if you need me. I'm sorry i couldn't debug the 12 parameter, 800 line method that just about everything is dependent on."
After cooling off at the coffee shop for an hour, i went back in the office, grabbed my crap and left. After a few days they called me asking if I was coming in, I said I would but I had an interview, maybe tomorrow.
"So your quitting then?"
On your source repository: use hooks that run before each commit (a pre-commit hook for SVN, for example); a rough sketch follows below.
In that hook, check for the existence of at least one test case for each method. Use a convention for unit test organisation that you can easily enforce via a pre-commit hook.
On an integration server, compile everything and regularly check the test coverage using a test coverage tool. If test coverage is not 100% for some code, block any further commits from that developer. He should send you the test cases that cover 100% of the code.
Only automatic checks can scale well on a project. You cannot check everything by hand.
The developer should have a means to check whether his test cases cover 100% of the code. That way, if he doesn't commit 100%-tested code, it is his own fault, not an "oops, sorry, I forgot" fault.
Remember: people never do what you expect; they always do what you inspect.
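For what it's worth, here is a rough sketch of that kind of pre-commit gate. It is not this answer's actual hook: a real hook is usually a few lines of shell calling svnlook, and a genuine "at least one test per method" check needs a coverage tool on the integration server. Purely to stay in this thread's language, the sketch is a tiny C++ helper that the hooks/pre-commit script could invoke (it relies on POSIX popen), and it only enforces an invented file-naming convention: every changed src/foo.cpp must be accompanied by a tests/foo_test.cpp.

    #include <cstdio>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Capture stdout of a command (svnlook below); errors are discarded,
    // which is acceptable for a sketch but not for a production hook.
    static std::string run(const std::string& cmd) {
        std::string out;
        if (FILE* pipe = popen((cmd + " 2>/dev/null").c_str(), "r")) {
            char buf[512];
            while (std::fgets(buf, sizeof buf, pipe)) out += buf;
            pclose(pipe);
        }
        return out;
    }

    int main(int argc, char** argv) {
        if (argc != 3) { std::cerr << "usage: check_tests REPOS TXN\n"; return 1; }
        const std::string repos = argv[1], txn = argv[2];

        // Paths touched by this transaction, e.g. "U   trunk/src/foo.cpp".
        std::istringstream changed(run("svnlook changed -t " + txn + " " + repos));
        std::string line;
        int blocked = 0;
        while (std::getline(changed, line)) {
            if (line.size() < 5) continue;
            std::string path = line.substr(4);                  // skip the status column
            std::string::size_type pos = path.find("src/");
            if (pos == std::string::npos || path.rfind(".cpp") != path.size() - 4)
                continue;
            // Our (assumed) convention: src/foo.cpp pairs with tests/foo_test.cpp.
            std::string stem = path.substr(pos + 4);            // foo.cpp
            std::string test = path.substr(0, pos) + "tests/"
                             + stem.substr(0, stem.size() - 4) + "_test.cpp";
            // svnlook filesize only prints a size if the file exists in the txn.
            if (run("svnlook filesize -t " + txn + " " + repos + " " + test).empty()) {
                std::cerr << "Commit blocked: missing " << test << "\n";
                ++blocked;
            }
        }
        return blocked == 0 ? 0 : 1;  // a non-zero exit rejects the commit
    }

Whatever the helper prints to stderr is relayed by SVN to the developer whose commit was rejected, which is what makes the "it is his own fault" feedback loop immediate.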
First off, like most respondents here point out, if the guy doesn't see the value in testing, there's not much you can do about it, and you've already pointed out that you can't fire the guy. However, failure is not an option here, so what about the few things you can do?
If your organization is large enough to have over 6 developers, I strongly recommend having a Quality Assurance department (even if it's just one person to start). Ideally, you should have a ratio of 1 tester to 3-5 developers. The thing about programmers is ... they are programmers, not testers. I have yet to interview a programmer who has been formally taught proper QA techniques.
Most organizations make the fatal mistake of assigning the testing role to the new hire, the person with the LEAST amount of exposure to your code -- ideally, the senior developers should be moved to the QA role, as they have the experience in the code and (hopefully) have developed a sixth sense for code smells and failure points that can crop up.
Furthermore, the programmer who made the mistake is probably not going to find the defect, because it's usually not a syntax error (those get picked up in the compile) but a logic error -- and the same logic is at work when they write the test as when they write the code. Don't have the person who developed the code test that code -- they'll find fewer bugs than anyone else would.
In your case, if you can afford the redirected work effort, make this new guy the first member of your QA team. Get him to read "Software Testing In The Real World: Improving The Process", because he obviously will need some training in his new role. If he doesn't like it, he'll quit and your problem is still solved.
A slightly less vengeful approach would be to let this person do what they are good at (I'm assuming this person got hired because they are actually competent at the programming part of the job), and hire a tester or two to do the testing (university students often have practicum or "co-op" terms, would love the exposure, and are cheap).
Side note: eventually, you'll want the QA team reporting to a QA director, or at least not to a software development manager, because having the QA team report to the manager whose primary goal is to get the product done is a conflict of interest.
If your organization is smaller than 6, or you can't get away with creating a new team, I recommend paired programming (PP). I'm not a total convert of all the extreme programming techniques, but I'm definitely a believer in paired programming. However, both members of the paired programming team have to be dedicated, or it simply doesn't work. They have to follow two rules: the inspector has to fully understand what is being coded on the screen or he has to ask the coder to explain it; the coder can only code what he can explain -- no "you'll see" or "trust me" or hand-waving will be tolerated.
I only recommend PP if your team is capable of it, because, like testing, no amount of cheering or threatening will persuade a couple of ego-filled introverts to work together if they don't feel comfortable doing so. However, I find that between the choice of writing detailed functional specs and doing code reviews vs. paired programming, the PP usually wins out.
If PP is not for you, then TDD is your best bet, but only if it's taken literally. Test-driven development means you write the tests FIRST, run the tests to prove they actually do fail, then write the simplest code to make it work. The trade-off is that you now (should) have a collection of thousands of tests, which is also code, and is just as likely as production code to contain bugs. I'll be honest, I'm not a big fan of TDD, mainly for this reason, but it works for many developers who would rather write test scripts than test case documents -- some testing is better than none. Couple TDD with PP for a better likelihood of test coverage and fewer bugs in the script.
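As a literal illustration of that red/green cycle (isLeapYear is just an invented requirement, not code from the question): the asserts below are written first and run against a stub to prove they fail, and the function body is then the simplest thing that makes them pass.

    #include <cassert>

    // Step 2 (green): the simplest implementation that satisfies the tests
    // written so far. The century rule is only here because a test demanded it.
    bool isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    int main() {
        // Step 1 (red): these existed, and failed, before the body above did.
        assert(isLeapYear(2004));
        assert(!isLeapYear(1900));
        assert(isLeapYear(2000));
        assert(!isLeapYear(2023));
        return 0;
    }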
If all else fails, have the programmers equivalence of a swear jar -- each time the programmer breaks the build, they have to put $20, $50, $100 (whatever is moderately painful for your staff) into a jar that goes to your favorite (registered!) charity. Until they pay up, shun them :)
All joking aside, the best way to get your programmer to write tests is don't let him program. If you want a programmer, hire a programmer -- If you want tests, hire a tester. I started as a junior programmer 12 years ago doing testing, and it turned into my career path, and I wouldn't trade it for anything. A solid QA department that is properly nurtured and given the power and mandate to improve the software is just as valuable as the developers writing the software in the first place.
This may be a bit heartless, but the way you describe the situation it sounds like you need to fire this guy. Or at least make it clear: refusing to follow house development practices (including writing tests) and checking in buggy code that other people have to clean up will eventually get you fired.
The main reason junior engineers/programmers don't take a lot of time to design and perform test scripts is that most CS programs do not heavily require it, so college courses cover other areas of engineering, such as design patterns, in more depth.
In my experience, the best way to get junior professionals into the habit is to make testing an explicit part of the process. That is, when estimating the time an iteration should take, the time to design, write, and/or execute the test cases should be incorporated into that estimate.
Finally, reviewing the test script design should be part of the design review, and the actual code should be reviewed in the code review. This makes the programmer accountable for properly testing each line of code he/she writes, and the senior engineer and peers responsible for providing feedback and guidance on the code and tests written.
Based on your comment, "Showing that the design becomes simpler" I'm assuming you guys practice TDD. Doing a code review after the fact is not going to work. The whole thing about TDD is that it's a design and not a testing philosophy. If he didn't write the tests as part of the design, you aren't going to get a lot of benefit from writing tests after the fact - especially from a junior developer. He'll end up missing a whole lot of corner cases and his code will still be crappy.
Your best bet is to have a very patient senior developer to sit with him and do some pair programming. And just keep at it until he learns. Or doesn't learn, in which case you need to reassign him to a task he is better suited to because you will just end up frustrating your real developers.
Not everyone has the same level of talent and/or motivation. Development teams, even agile ones, are made up of people on the "A-Team" and people on the "B-Team". A-Team members are the ones who architect the solution, write all the non-trivial production code, and communicate with the business owners -- all the work that requires thinking outside the box. The B-Team handles things like configuration management, writing scripts, fixing lame bugs, and doing maintenance work -- all the work that follows strict procedures and has small consequences for failure.