We have a junior programmer that simply doesn't write enough tests.
I have to nag him every two hours, "have you written tests?"
We've tried:
Showing that the design becomes simpler
Showing it prevents defects
Making it an ego thing saying only bad programmers don't
This weekend two team members had to come in to work because his code had a null reference and he hadn't tested it.
My work requires top quality stable code, and usually everyone 'gets it' and there's no need to push tests through. We know we can make him write tests, but we all know the useful tests are those written when you're into it.
Do you know of more motivations?
This is one of the hardest things to do: getting your people to get it.
Sometimes one of the best ways to help junior level programmers 'get it' and learn the right techniques from the seniors is to do a bit of pair programming.
Try this: on an upcoming project, pair the junior guy up with yourself or another senior programmer. They should work together, taking turns "driving" (being the one typing at the keyboard) and "coaching" (looking over the shoulder of the driver and pointing out suggestions, mistakes, etc. as they go). It may seem like a waste of resources, but you will find:
That these guys together can produce code plenty fast and of higher quality.
If your junior guy learns enough to "get it" with a senior guy directing him along the right path (e.g. "OK, now before we continue, let's write a test for this function"), it will be well worth the resources you commit to it.
Maybe also have someone in your group give the Unit Testing 101 presentation by Kate Rhodes; I think it's a great way to get people excited about testing, if delivered well.
Another thing you can do is have your Jr. Devs practice the Bowling Game Kata, which will help them learn Test Driven Development. It is in Java, but could easily be adapted to any language.
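If it helps to make that concrete, the kata's opening move looks roughly like this in JUnit 4. Treat it as a sketch: the Game class with roll() and score() is the design the kata itself drives out, not something that exists up front.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// First test of the Bowling Game Kata: a gutter game (20 rolls, no pins) scores zero.
public class BowlingGameTest {
    @Test
    public void gutterGameScoresZero() {
        Game game = new Game();
        for (int i = 0; i < 20; i++) {
            game.roll(0);          // knock down no pins on every throw
        }
        assertEquals(0, game.score());
    }
}

Each following test (all ones, a spare, a strike, a perfect game) forces a little more of the scoring logic into existence, which is the point of the exercise.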
Have a code review before every commit (even if it's a 1 minute "I've changed this variable name"), and as part of the code review, review any unit tests.
Don't sign off on the commit until the tests are in place.
(Also: if his work wasn't tested, why was it in a production build in the first place? If it's not tested, don't let it in; then you won't have to work weekends.)
For myself, I have started insisting that every bug I find and fix be expressed as a test:
"Hmmm, that's not right..."
Find possible problem
Write a test, show that the code fails
Fix the problem
Show that the new code passes
Loop if the original problem persists
I try to do this even while banging stuff out, and I get done in about the same time, only with a partial test suite already in place.
(I don't live in a commercial programming environment, and am often the only coder working a particular project.)
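To make the "write a test, show that the code fails" step concrete, here is a minimal JUnit 4 sketch; PriceParser and parsePrice are hypothetical names standing in for whichever code the bug lives in.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A test that reproduces the reported bug.
// It fails against the current code (documenting the problem) and
// passes once the fix is in, guarding against the bug coming back.
public class PriceParserTest {
    @Test
    public void emptyStringIsTreatedAsZero() {
        assertEquals(0, PriceParser.parsePrice(""));
    }
}

The test stays in the suite afterwards, so the "loop if the original problem persists" step costs nothing more than re-running it.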
Imagine I am a mock programmer, named... Marco. Imagine I graduated school not that long ago and never really had to write tests. Imagine I work in a company that doesn't really enforce or ask for this. OK? Good! Now imagine that the company is switching to using tests, and they are trying to get me in line with this. I will give a somewhat snarky reaction to the items mentioned so far, as if I hadn't done any research on this.
Let's get this started with the creator:
Showing that the design becomes simpler.
How can writing more make things simpler? I would now have to keep tabs on more cases, etc. That makes it more complicated, if you ask me. Give me solid details.
Showing it prevents defects.
I know that. This is why they are called tests. My code is good, and I checked it for issues, so I don't see where those tests would help.
Making it an ego thing saying only bad programmers don't.
Ohh, so you think I am a bad programmer just because I don't do much testing? I'm insulted and positively annoyed at you. I would rather have assistance and support than sayings.
#Justin Standard: At the start of a new project, pair the junior guy up with yourself or another senior programmer.
Ohh, this is so important that resources will be spent making sure I see how things are done, and that someone will assist me with how things are done. This is helpful, and I might just start doing it more.
#Justin Standard: Read Unit Testing 101 presentation by Kate Rhodes.
Ahh, that was an interesting presentation, and it made me think about testing. It hammered in some points that I should consider, and it might have swayed my views a bit.
I would love to see more compelling articles, and other tools to assist me in getting in line with thinking this is the right way to do things.
#Dominic Cooney: Spend some time and share testing techniques.
Ahh, this helps me understand what is expected of me as far as techniques go, and it puts more items in my bag of knowledge that I might use again.
#Dominic Cooney: Answer questions, examples and books.
Having a point person (or people) to answer questions is helpful; it might make me more likely to try. Good examples are great, and they give me something to aim for and something to look to for reference. Books that are directly relevant to this are great references.
#Adam Hayle: Surprise Review.
Say what? You sprung something on me that I am completely unprepared for. I feel uncomfortable with this, but will do my best. I will now be scared and mildly apprehensive of this coming up again, thank you. The scare tactic might have worked, but it does have a cost. Still, if nothing else works, this might just be the push that is needed.
#Rytmis: Items are only considered done when they have test cases.
Ohh, interesting. I see I really do have to do this now, otherwise I'm not completing anything. This makes sense.
#jmorris: Get Rid / Sacrifice.
glares, glares, glares - There is a chance I might learn, and with support and assistance I can become a very important and functional part of the team. This is one of my handicaps now, but it won't be for long. However, if I just don't get it, I understand that I will go. I think I will get it.
In the end, the support of my team will play a large part in all this. Having a person take their time to assist me and get me started on good habits is always welcome. Then, afterward, having a good support net would be great. It would always be appreciated to have someone come by a few times afterward and go over some code, to see how everything is flowing - not in a review per se, but more as a friendly visit.
Reasoning, Preparing, Teaching, Follow up, Support.
I've noticed that a lot of programmers see the value of testing on a rational level. If you've heard things like "yeah, I know I should test this but I really need to get this done quickly" then you know what I mean. However, on an emotional level they feel that they get something done only when they're writing the real code.
The goal, then, should be to somehow get them to understand that testing is in fact the only way to measure when something is "done", and thus give them the intrinsic motivation to write tests.
I'm afraid that's a lot harder than it should be, though. You'll hear a lot of excuses along the lines of "I'm in a real hurry, I'll rewrite/refactor this later and then add the tests" -- and of course, the followup never happens because, surprisingly, they're just as busy the next week.
Here's what I would do:
First time out... "we're going to do this project jointly. I'm going to write the tests and you're going to write the code. Pay attention to how I write the tests, coz that's how we do things around here and that's what I'll expect of you."
Following that... "You're done? Great! First let's look at the tests that are driving your development. Oh, no tests? Let me know when that is done and we'll reschedule looking at your code. If you're needing help to formulate the tests let me know and I'll help you."
He's already doing this. Really. He just doesn't write it down. Not convinced? Watch him go through the standard development cycle:
Write a piece of code
Compile it
Run to see what it does
Write the next piece of code
Step #3 is the test. He already does testing, he just does it manually. Ask him this question: "How do you know tomorrow that the code from today still works?" He will answer: "It's such a little amount of code!"
Ask: "How about next week?"
When he hasn't got an answer, ask: "How would you like your program to tell you when a change breaks something that worked yesterday?"
That's what automatic unit testing is all about.
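If it helps, here is that manual step written down as a JUnit 4 test, so it can be replayed tomorrow and next week; InvoiceCalculator and totalWithTax are made-up names standing in for "the code from today".

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The manual "run it and see what it does" check, captured so it can answer
// "does yesterday's code still work?" after every future change.
public class InvoiceCalculatorTest {
    @Test
    public void addsTenPercentTax() {
        InvoiceCalculator calculator = new InvoiceCalculator();
        assertEquals(110.0, calculator.totalWithTax(100.0), 0.001);
    }
}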
As a junior programmer, I'm still trying to get into the habit of writing tests. Obviously it's not easy to pick up new habits, but thinking about what would make this work for me, I have to +1 the comments about code reviews and coaching/pair programming.
It may also be worth emphasising the long-term purpose of testing: ensuring that what worked yesterday is still working today, and next week, and next month. I only say that because in skimming the answers I didn't see that mentioned.
In doing code reviews (if you decide to do that), make sure your young dev knows it's not about putting him down, it's about making the code better. Because that way his confidence is less likely to get damaged. And that's important. On the other hand, so is knowing how little you know.
Of course, I don't really know anything. But I hope the words have been useful.
Edit: [Justin Standard]
Don't put yourself down, what you have to say is pretty much right on.
On your point about code reviews: what you will find is that not only will the junior dev learn in the process, but so will the reviewers. Everyone in a code review learns if you make it a collaborative process.
Frankly, if you are having to put that much effort into getting him to do something, then you may have to come to terms with the thought that he may just not be a good fit for the team and may need to go. Now, this doesn't necessarily mean firing him... it may mean finding someplace else in the company where his skills are better suited. But if there is no place else... you know what to do.
I'm assuming he is also a fairly new hire (< 1 year) and probably recently out of school... in which case he may not be accustomed to how things work in a corporate setting. He's doing things most of us could get away with in college.
If this is the case, one thing I've found works is to have a sort of "surprise new hire review." It doesn't matter if you've never done it before... he won't know that. Just sit him down and tell him you are going to go over his performance and show him some real numbers... take your normal review sheet (you do have a formal review process, right?) and change the heading if you want so it looks official, and show him where he stands. If you show him in a very formal setting that not doing tests is adversely affecting his performance rating, as opposed to just "nagging" him about it, he will hopefully get the point. You've got to show him that his actions will actually affect him, be it pay-wise or otherwise.
I know, you may want to stay away from doing this because it's not official... but I think you are within reason to do it and it's probably going to be a whole lot cheaper than having to fire him and recruit someone new.
As a junior programmer myself, I thought that I'd reveal what it was like when I found myself in a similar situation to your junior developer.
When I first came out of uni, I found that it had severely under-equipped me to deal with the real world. Yes, I knew some Java basics and some philosophy (don't ask), but that was about it. When I first got my job it was a little daunting, to say the least. Let me tell you, I was probably one of the biggest cowboys around; I would hack together a little bug fix / algorithm with no comments / testing / documentation and ship it out the door.
I was lucky enough to be under the supervision of a kind and very patient senior programmer. Luckily for me, he decided to sit down with me and spend 1-2 weeks going through my very hacked-together code. He would explain where I'd gone wrong, and the finer points of C and pointers (boy, did that confuse me!). We managed to write a pretty decent class/module in about a week. All I can say is that if the senior dev hadn't invested the time to help me along the right path, I probably wouldn't have lasted very long.
Happily, two years down the line, I would hope that some of my colleagues might even consider me an average programmer.
Take home points
Most Universities are very bad at preparing students for the real world
Paired programming really helped me. That's not to say that it will help everyone, but it worked for me.
Assign them to projects that don't require "top quality stable code" if that's your concern and let the jr. developer fail. Have them be the one to 'come in on the weekend' to fix their bugs. Have lunch a lot and talk about software development practices (not lectures, but discussions). In time they will acquire and develop the best practices to do the tasks they are assigned.
Who knows, they might even come up with something better than the techniques your team currently uses.
If the junior programmer, or anyone, doesn't see the value in testing, then it will be hard to get them to do it...period.
I would have made the junior programmer sacrifice his weekend to fix the bug. His actions (or lack thereof) are not affecting him directly. Also, make it apparent that he will not see advancement and/or pay increases if he doesn't improve his skills in testing.
In the end, even with all your help, encouragement, mentoring, he might not be a fit for your team, so let him go and look for someone who does get it.
I second RodeoClown's comment about code reviewing every commit. Once he's done it a fair few times he'll get in the habit of testing stuff.
I don't know if you need to block commits like that though.
At my workplace everyone has free commit to everything, and all SVN commit messages (with diffs) are emailed to the team.
Note: you really want the Thunderbird colored-diffs add-on if you plan on doing this.
My boss or myself (the 2 'senior' coders) will end up reading over the commits, and if there's any stuff like "you forgot to add unit tests" we just flick an email or go and chat to the person, explaining why they needed unit tests or whatever. Everyone else is encouraged to read the commits too, as it's a great way of seeing what's going on, but the junior devs don't comment so much.
You can help encourage people to get into the habit of this by periodically saying things like "Hey, bob, did you see that commit I did this morning, I found this neat trick where you can do blah blah whatever, read the commit and see how it works!"
NB: We have 2 'senior' devs and 3 junior ones. This may not scale, or you might need to adjust the process a bit with more developers.
It's his mentor's responsibility to teach him/her. How well are you teaching him/her HOW to test? Are you pair programming with him? The junior more than likely doesn't know HOW to set up a good test for xyz.
As a junior fresh out of school, he knows many concepts, some technique, some experience. But in the end, all a junior is is POTENTIAL. On almost every feature they work on, there will be something new that they have never done before. Sure, the junior may have done a simple State pattern for a project in class, opening and shutting "doors", but never a real-world application of the patterns.
He/she will only be as good as how well you teach. If they were able to "just get it", do you think they would have taken a junior position in the first place?
In my experience, juniors are hired and given almost the same responsibility as seniors, but are just paid less and then ignored when they start to falter. Forgive me if I seem bitter; it's 'cause I am.
Make code coverage part of the reviews.
Make "write a test that exposes the bug" a prerequisite to fixing a bug.
Require a certain level of coverage before code can be checked in.
Find a good book on test-driven development and use it to show how test-first can speed development.
Lots of psychology and helpful "mentoring" techniques but, in all honestly, this just boils down to "write tests if you want to still have a job, tomorrow."
You can couch it in whatever terms you think are appropriate, harsh or soft, it doesn't matter. But the fact is, programmers are not paid to just throw together code & check it in -- they're paid to carefully put together code, then put together tests, then test their code, THEN check the whole thing in. (At least that's what it sounds like, from your description.)
Hence, if someone is going to refuse to do their job, explain to them that they can stay home, tomorrow, and you'll hire someone who WILL do the job.
Again, you can do all this gently, if you think that's necessary, but a lot of people just need a big hard slap of Life In The Real World, and you'd be doing them a favor by giving it to them.
Good luck.
Change his job description for a while to solely be writing tests and maintaining tests. I've heard that many companies do this for fresh new inexperienced people for a while when they start.
Additionally, issue a challenge while he's in that role: write tests that will a) fail on the current code, and b) fulfill the requirements of the software. Hopefully it'll cause him to create some solid and thorough tests (improving the project) and make him better at writing tests for when he re-integrates into core development.
Edit: "fulfill the requirements of the software" meaning that he's not just writing tests to purposely break the code when the code was never intended or needed to take that test case into account.
If your colleague lacks experience writing tests maybe he or she is having difficulty testing beyond simple situations, and that is manifesting itself as inadequate testing. Here's what I would try:
Spend some time and share testing techniques, like dependency injection, looking for edge cases, and so on with your colleague.
Offer to answer questions about testing.
Do code reviews of tests for a while. Ask your colleague to review changes of yours that are exemplary of good testing. Look at their comments to see if they're really reading and understanding your test code.
If there are books that fit particularly well with your team's testing philosophy give him or her a copy. It might help if your code review feedback or discussions reference the book so he or she has a thread to pick up and follow.
I wouldn't especially emphasize the shame/guilt factor. It is worth pointing out that testing is a widely adopted, good practice and that writing and maintaining good tests is a professional courtesy so your team mates don't need to spend their weekends at work, but I wouldn't belabor those points.
If you really need to "get tough" then institute an impartial system; nobody likes to feel like they're being singled out. For example your team might require code to maintain a certain level of test coverage (able to be gamed, but at least able to be automated); require new code to have tests; require reviewers to consider the quality of tests when doing code reviews; and so on. Instituting that system should come from team consensus. If you moderate the discussion carefully you might uncover other underlying reasons your colleague's testing isn't what you expect.
# jsmorris
I once had the senior developer and "architect" berate me and a tester (it was my first job out of college) in email for not staying late and finishing such an "easy" task the night before. We had been at it all day and called it quits at 7pm; I had been thrashing since 11am that day and had pestered every member of our team for help at least twice.
I responded and cc'd the team with:
"I've been disappointed in you for a month now. I never get help from the team. I'll be at the coffee shop across the street if you need me. I'm sorry i couldn't debug the 12 parameter, 800 line method that just about everything is dependent on."
After cooling off at the coffee shop for an hour, i went back in the office, grabbed my crap and left. After a few days they called me asking if I was coming in, I said I would but I had an interview, maybe tomorrow.
"So your quitting then?"
On your source repository: use hooks before each commit (a pre-commit hook for SVN, for example).
In that hook, check for the existence of at least one test case for each method. Use a convention for unit test organisation that you can easily enforce via the pre-commit hook.
On an integration server, compile everything and regularly check the test coverage using a coverage tool. If test coverage is not 100% for a piece of code, block any commit from that developer. He should send you the test cases that cover 100% of the code.
Only automatic checks can scale well on a project. You cannot check everything by hand.
The developer should have a means to check whether his test cases cover 100% of the code. That way, if he doesn't commit 100% tested code, it is his own fault, not an "oops, sorry, I forgot" fault.
Remember : People never do what you expect, they always do what you inspect.
First off, like most respondents here point out, if the guy doesn't see the value in testing, there's not much you can do about it, and you've already pointed out that you can't fire the guy. However, failure is not an option here, so what about the few things you can do?
If your organization is large enough to have over 6 developers, I strongly recommend having a Quality Assurance department (even if it's just one person to start). Ideally, you should have a ratio of 1 tester to 3-5 developers. The thing about programmers is ... they are programmers, not testers. I have yet to interview a programmer that has been formally taught proper QA techniques.
Most organizations make the fatal mistake of assigning the testing role to the new hire, the person with the LEAST amount of exposure to your code -- ideally, the senior developers should be moved to the QA role, as they have the experience in the code and (hopefully) have developed a sixth sense for code smells and failure points that can crop up.
Furthermore, the programmer who made the mistake is probably not going to find the defect, because it's usually not a syntax error (those get picked up in the compile) but a logic error -- and the same logic is at work when they write the test as when they write the code. Don't have the person who developed the code test that code -- they'll find fewer bugs than anyone else would.
In your case, if you can afford the redirected work effort, make this new guy the first member of your QA team. Get him to read "Software Testing In The Real World: Improving The Process", because he obviously will need some training in his new role. If he doesn't like it, he'll quit and your problem is still solved.
A slightly less vengeful approach would be to let this person do what they are good at (I'm assuming this person got hired because they are actually competent at the programming part of the job), and hire a tester or two to do the testing (university students often have practicum or "co-op" terms, would love the exposure, and are cheap).
Side Note: Eventually, you'll want the QA team reporting to a QA director, or at least not to a software developer manager, because having the QA team report to the manager who's primary goal is to get the product done is a conflict of interest.
If your organization is smaller than 6, or you can't get away with creating a new team, I recommend paired programming (PP). I'm not a total convert of all the extreme programming techniques, but I'm definitely a believer in paired programming. However, both members of the paired programming team have to be dedicated, or it simply doesn't work. They have to follow two rules: the inspector has to fully understand what is being coded on the screen or he has to ask the coder to explain it; the coder can only code what he can explain -- no "you'll see" or "trust me" or hand-waving will be tolerated.
I only recommend PP if your team is capable of it, because, like testing, no amount of cheering or threatening will persuade a couple of ego-filled introverts to work together if they don't feel comfortable doing so. However, I find that between the choice of writing detailed functional specs and doing code reviews vs. paired programming, the PP usually wins out.
If PP is not for you, then TDD is your best bet, but only if it's taken literally. Test-driven development means you write the tests FIRST, run the tests to prove they actually do fail, then write the simplest code to make them pass. The trade-off is that you now (should) have a collection of thousands of tests, which is also code, and is just as likely as production code to contain bugs. I'll be honest, I'm not a big fan of TDD, mainly for this reason, but it works for many developers who would rather write test scripts than test case documents -- some testing is better than none. Couple TDD with PP for a better likelihood of test coverage and fewer bugs in the script.
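For what "taken literally" looks like, here is a compressed red/green sketch in JUnit 4. LeapYear is a made-up example; in a real session each clause of the rule is forced out by one failing test at a time rather than all at once.

import org.junit.Test;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertFalse;

public class LeapYearTest {
    // Written first, and run to prove they fail before any production code exists.
    @Test public void year2012IsALeapYear()    { assertTrue(LeapYear.isLeap(2012)); }
    @Test public void year1900IsNotALeapYear() { assertFalse(LeapYear.isLeap(1900)); }
    @Test public void year2000IsALeapYear()    { assertTrue(LeapYear.isLeap(2000)); }
}

// Production code written afterwards, just enough to turn the bar green.
class LeapYear {
    static boolean isLeap(int year) {
        return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
    }
}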
If all else fails, have the programmer's equivalent of a swear jar -- each time the programmer breaks the build, they have to put $20, $50, $100 (whatever is moderately painful for your staff) into a jar that goes to your favorite (registered!) charity. Until they pay up, shun them :)
All joking aside, the best way to get your programmer to write tests is don't let him program. If you want a programmer, hire a programmer -- If you want tests, hire a tester. I started as a junior programmer 12 years ago doing testing, and it turned into my career path, and I wouldn't trade it for anything. A solid QA department that is properly nurtured and given the power and mandate to improve the software is just as valuable as the developers writing the software in the first place.
This may be a bit heartless, but the way you describe the situation it sounds like you need to fire this guy. Or at least make it clear: refusing to follow house development practices (including writing tests) and checking in buggy code that other people have to clean up will eventually get you fired.
The main reason junior engineers/programmers don't take much time to design and perform test scripts is that most CS programs do not heavily require this, so other areas of engineering, such as design patterns, are covered further in college.
In my experience, the best way to get junior professionals into the habit is to make it an explicit part of the process. That is, when estimating the time an iteration should take, the time to design, write, and/or execute the test cases should be incorporated into that estimate.
Finally, reviewing the test script design should be part of a design review, and the actual code should be reviewed in the code review. This makes the programmer liable for doing proper testing of each line of code he/she writes, and the senior engineer and peers liable to provide feedback and guidance on the code and test written.
Based on your comment, "Showing that the design becomes simpler" I'm assuming you guys practice TDD. Doing a code review after the fact is not going to work. The whole thing about TDD is that it's a design and not a testing philosophy. If he didn't write the tests as part of the design, you aren't going to get a lot of benefit from writing tests after the fact - especially from a junior developer. He'll end up missing a whole lot of corner cases and his code will still be crappy.
Your best bet is to have a very patient senior developer to sit with him and do some pair programming. And just keep at it until he learns. Or doesn't learn, in which case you need to reassign him to a task he is better suited to because you will just end up frustrating your real developers.
Not everyone has the same level of talent and/or motivation. Development teams, even agile ones, are made up of people on the "A-Team" and people on "B-Team". A-Team members are the one who architect the solution, write all the non-trivial production code, and communicate with the business owners - all the work that requires thinking outside the box. The B-Team handle things like configuration management, writing scripts, fixing lame bugs, and doing maintenance work - all the work that has strict procedures that have small consequences for failure.
Our code sucks. Actually, let me clarify that. Our old code sucks. It's difficult to debug and is full of abstractions that few people understand or even remember. Just yesterday I spent an hour debugging in an area that I've worked for over a year and found myself thinking, "Wow, this is really painful." It's not anyone's fault - I'm sure it all made perfect sense initially. The worst part is usually It Just Works...provided you don't ask it to do anything outside of its comfort zone.
Our new code is pretty good. I think we're doing a lot of good things there. It's clear, consistent, and (hopefully) maintainable. We've got a Hudson server running for continuous integration and we have the beginnings of a unit test suite in place. The problem is our management is laser-focused on writing New Code. There's no time to give Old Code (or even old New Code) the TLC it so desperately needs. At any given moment our scrum backlog (for six developers) has about 140 items and around a dozen defects. And those numbers aren't changing much. We're adding things as fast as we can burn them down.
So what can I do to avoid the headaches of marathon debugging sessions mired in the depths of Old Code? Every sprint is filled to the brim with new development and showstopper defects. Specifically...
What can I do to help maintenance and refactoring tasks get high enough priority to be worked?
Are there any C++-specific strategies you employ to help prevent New Code from rotting so quickly?
Your management may be focused on getting working features into the product, and keeping them working. In this case, you will need to make a business case for refactoring the old stuff, in that by X investment of time and effort you can reduce necessary maintenance time by Y over period Z. Or your management may be fundamentally clueless (this happens, but less often than most developers seem to think), in which case you'll never get permission.
You need to see the business point of view. It doesn't matter to the end user whether the code is ugly or elegant, only what the software does. The cost of bad code is potential unreliability and additional difficulty in changing it; the emotional distress it causes to the programmer is rarely considered.
If you can't get permission to go in and refactor, you can always try it on your own, a little bit at a time. Whenever you fix a bug, do a little rewriting to make things clearer. This may turn out to be faster than the minimum possible fix, particularly in verifying that the code now works. Even if it isn't, it's usually possible to take a little more time on a bug fix without getting into trouble. Just don't get carried away.
If you can leave the code just a little better each time you go in, you'll feel a lot better about it.
Stand Up Meetings
I might go to my mechanic, and we have a little stand-up meeting in the morning:
I tell him I want my wheels aligned, my tires rotated, and my oil changed. I mention that "Oh by the way my brakes felt a little soft on the way in. Could [he] take a look at them? How soon can I get my car back because I need to get back to work?"
He pops his head under my car, pops back up and says my brakes are leaking oil and starting to fail. He will need a part that will arrive at 10:30am. His man won't finish before lunch, but I should get my car back by 1:30pm or so. He's booked solid so he won't be able to do any of the other stuff today, and I will have to book another appointment.
I ask if he can do the other stuff and I come back for the brakes. He tells me he really can't let me drive out of there without fixing the brakes because they might cause an accident, but if I want to go to another mechanic, he can call for a tow.
Since the car will be done so shortly after lunch, I ask if his man can take a late lunch so I can get my car back an hour earlier.
He tells me his men come in at 8am and often work into the evening. They earn every break they get, and his man deserves to take his lunch with everyone else.
None of that is what I wanted to hear. I wanted to hear that I would drive out of there in a half hour with my wheels, tires and oil done.
My mechanic was just straight up and honest with me. Are you straight up and honest with your management? Or do you avoid telling them things they don't want to hear?
Unit Testing
I wouldn't touch a line of code I didn't understand, and I wouldn't check in a new line of code I didn't test thoroughly. (At least, not intentionally.)
Your question seems to imply that somehow a large corpus of poorly documented code made it past review without any unit tests. Maybe you participated in that, and maybe you didn't. Everyone involved needs to accept responsibility for that--including management. Regardless, what's done is done. You cannot go back and change it.
However, right now, in the present time, it is everybody's responsibility to stop the behavior that led to the problem in the first place. You say you spent a year working in code that you find difficult to understand and that has no unit tests. During that year, as you worked hard to improve your understanding, how many unit tests did you write to document and to verify that understanding?
As you struggled through the code slowly gaining understanding, how many comments did you add so you wouldn't have to struggle next time?
Scrum Backlog
Personally, I think the term "Scrum backlog" is a misnomer. A list of things to do is just a list--a shopping list if you will. I had a list when I went to the mechanic. My stand up meeting with the mechanic was really more of a sprint planning meeting.
A sprint planning meeting is a negotiation. If your management is time boxing without that negotiation, they aren't managing anything. They are simply trying to cram 10 lbs of shit into a 5 lb sack, and it's your responsibility to tell them so.
When you show up to a sprint planning meeting, you are expected to commit to a body of work, and it's your responsibility to prepare for that. Preparation means having some idea of what you will have to do to complete each item on the list--including the time it takes to understand obscure code and the time it takes to write unit tests.
If someone invites you to a planning meeting where you won't have time to prepare, decline the meeting and suggest when to reschedule so you will have time.
If you have an existing body of code with no unit tests and a feature might conceivably affect the operation of that code, you need to write unit tests for as much of the old code as might be affected. When you commit to writing the feature, you are committing to doing that work. If that leaves you too little time to commit to some other feature, just say so. Don't commit to the other feature.
When you commit to fix a defect, you commit to testing your work. Obviously, that means writing a unit test for the defect. But if it involves old code with no unit tests, it also means writing unit tests for things that aren't broken yet, but might break due to your change. How else will you test the fix?
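One practical way to do that for old code is a characterization (pinning) test: assert whatever the code does today, so that a change which alters that behaviour shows up as a failing test instead of a weekend call-out. A sketch, with LegacyTariff and rateFor as hypothetical names:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LegacyTariffCharacterizationTest {
    @Test
    public void currentRateForDomesticCustomers() {
        LegacyTariff tariff = new LegacyTariff();
        // The expected value is whatever the old code returns right now,
        // captured by running it once. It documents behaviour, not intent.
        assertEquals(42.50, tariff.rateFor("DOMESTIC"), 0.001);
    }
}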
If your defect list remains a constant size, your team regresses as much as it fixes. Politely explain to whomever needs to understand that unit tests prevent the regressions that currently keep your defect list from shrinking.
If you fail to write those unit tests because you commit to too many features, whose responsibility is that?
Refactoring
When you refactor code, you have to test all of it, and that means writing unit tests for all of it. If you have a large body of code with no unit tests, you will have to write all of those unit tests before you refactor.
I suggest you hold off on refactoring until those unit tests are in place. In the meantime, if you insist on including unit tests in your estimates for the work you commit to, eventually all those unit tests will be there. And then you can refactor.
The one exception to that is refactoring for testability. You may find that some of the code was not designed for test and that you have to refactor for things like dependency injection before you can create your unit tests. When you commit to writing the feature that requires the unit test, you commit to making the code testable. Include that in your estimate when you commit to the feature.
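A small illustration of what refactoring for testability often amounts to: pass the awkward dependency in through the constructor instead of reaching for it inside the method, so a test can substitute a fake. ReportScheduler and Clock are illustrative names, not anything from a real codebase.

import org.junit.Test;
import static org.junit.Assert.assertTrue;

interface Clock {
    long now();
}

class ReportScheduler {
    private final Clock clock;

    ReportScheduler(Clock clock) {      // injected, rather than calling System.currentTimeMillis() inside
        this.clock = clock;
    }

    boolean isDue(long deadlineMillis) {
        return clock.now() >= deadlineMillis;
    }
}

public class ReportSchedulerTest {
    @Test
    public void reportIsDueOnceTheDeadlineHasPassed() {
        Clock fixedClock = () -> 1000L;  // a fake clock fully under the test's control
        assertTrue(new ReportScheduler(fixedClock).isDue(999L));
    }
}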
Commitment + Responsibility = Power
You say you are powerless. When you accept responsibility and commit to doing what needs doing, I think you will find you have all the power you need.
P.S. If anyone complains about anybody "wasting time" writing multiple unit tests when fixing a single defect, show them this video on the 80:20 rule and pound "defect clusters" into their brains.
It is hard to tell much from the information you give, but some questions come to mind. One logical reason to be writing new code is to replace the old code - is that what you are doing? If so, abandon the old code.
Is it also the old code that has showstopper defects? If so, where are they coming from? Old code does not usually have "showstopper" defects; it just grinds closer and closer to a halt. It is old code, after all - it should have the same old defects and the same old limitations, not stuff that has to be looked at right away. Showstopper defects are new-code defects. It sounds like there is active development going on in the old code.
If you are writing all this new code on top of old code that sucks, with no plans to fix it once and for all, sorry, there is only so much you can do when you are too busy burying yourself to dig yourself out.
If the latter is the case, you should recognize where you are headed and try to detach a little. It is all going to collapse eventually; if you plan on being around, save your strength for a worthwhile battle.
In the meantime, try to pick up some design patterns. There are several that can at least help shield your new code from the old stuff, but still, ultimately it is just hard to write good code against bad code.
And your sprints sound a bit confused. Is there not an overall direction? That should determine how much backlog you have; although things can change month to month, is there not a clear sense of moving towards some final goal?
And new code rotting? The way you prevent that is to have a meaningful design, a meaningful direction, and a quality team that is committed to both the quality of their work and the vision of the design. If you have that, discipline is what maintains quality. If you don't have that, sorry, but you were basically writing code with no purpose already. It was rotten on the vine.
Not being critical, just trying to be honest. Take a deep breath. Slow down. You seem like you need it. Look at what you have written here. It tells us nothing. You talk of refactoring, scrums, showstoppers, defects, old code, new code. What does any of that mean? It is all jumbled up.
What about "new initiatives versus legacy systems"? Or "need to refactor early sprint-cycle code in terms of the latest understanding", etc.? Are the showstoppers in fact "early components of the current enterprise initiatives have been released but are experiencing problems, and no time is budgeted because of new development"?
These would be meaningful concepts. You've given us nothing. I understand it is intense. My sprints are crazy too; we add a lot of backlog items because we could not get many requirements up front (a lot of my new requirements result from also having to contend with external regulatory bodies; the normal business process is not always available).
But at the same time I am ground down by the sheer magnitude of what has to be done and the time to do it. Everything that is added to my backlog needs to be there. It is crazy, but at the same time I have a very clear idea of where I have been, where I need to go, and why the road is getting harder.
Step back, clear your thoughts, figure out the same - where you have been and where you are going. Because if you know that, it sure is not obvious. If you cannot communicate anything your peers can understand, how far are you going to get with a business manager?
Old code always sucks. There are probably some rare exceptions written by people with names like Kernighan or Thompson but, for the typical "code written in an office" stuff, over time it's gonna stink. Developers get more experienced. Newer practices, such as continuous integration, change the game. Stuff gets forgotten. New maintainers fail to grasp designs and wish for re-writes. So best accept this as normal.
Some random things that might help...
Talk about it with your team. Share your experiences and your concerns, while avoiding "man your old code sucks" (for obvious reasons) and see what the consensus is. You're probably not alone.
Forget about your managers. Don't expose them to this level of detail - they don't need to think about new vs. old code and probably won't understand if they do. This is a problem for your team to tackle and, if necessary, to make your PO aware of
Be open to the possibility that you may be able to throw stuff out. Some of that old code probably relates to features that are no longer being used or failed to be adopted by users in the first place. To make this work for you, you really need to go a level higher and think in terms of where the code really delivers user or business value vs. where it's just a ball of mud that no one is brave enough to take a decision on. Who dares, wins.
Relax your view of architectural consistency. There's always a way to tap into a working system with new code somewhere, and that may allow you to slowly migrate to a newer, smarter approach, while preserving the old long enough not to break existing things.
Overall, winning in this kind of situation is less about coding skills and much more about smart choices and handling the human aspects.
Hope that helps.
I recommend keeping track of how many bugs and code changes involve your "old code" and present this to either your manager or to your fellow developers at your next team meeting. With this in hand it should be simple enough to convince them that more needs to be done to refactor your "old code" and bring it up to par with your "new code".
It would also be prudent to document the parts of your "old code" that are most difficult to understand. These would also be the parts of your "old code" that you should be refactoring first once you get the approval.
Something to try: group your classes into - say - the worst 10%, the best 10%, and the rest. Deliver the lists to your management, saying, "I predict the majority of bugs over the next quarter will be found in the first set." Base the grouping on length, cyclomatic complexity, test coverage - whatever tools are handy and comfortable for you. Then sit back and watch - and be right. Now you've got some credibility, some leverage when you say, "I'd like to invest some resources in making our bad code better, to reduce bugs and maintenance costs - and I know where to invest that energy, see?"
You could create diagrams and sketches of how the new code works and how the classes and functions are related to one another. You could use FreeMind or maybe Dia. And I definitely agree with documenting and commenting your code.
I once had a problem with this too. I wrote a font class for J2ME for my own language. It was awful, for reasons you might also see in your code:
No comments or documentation
Not very object-oriented
Bad variable/function names
...
But after a few months I was forced to write the whole thing again. Now I've learned to use meaningful variable names (which are sometimes VERY long), to write comments more than I write code, and to use diagrams for the project's classes and their relationships.
I don't know if this was a real answer, but it definitely worked for me. And for old code you might actually have to reread the whole thing and add comments as you remember the functionality.
Hope it helped.
Talk to your Product Owner! Explain that time invested in refactoring the old code will bring him benefit of higher team velocity on new features once this obstacle is removed.
Other than the approaches mentioned above which are good, you can also try these:
For keeping future code clean
Try pair programming, at least for the parts where it makes sense. It's an effective way of making reviewed, refactored code a standard practice.
Try to get refactoring onto the definition of "done". Then it will be part of the estimation process and allotted accordingly. So the definition of done might include: coded, unit tested, functionally tested, performance tested, code reviewed, refactored, and integrated (or something like this).
For Cleaning up the old code:
Unit tests are great for helping you refactor and figure out how things work.
I agree with the comments that a business case needs to be made for large-scale refactoring. But small-scale refactoring can easily be included in the estimate and will provide an immediate return, e.g.: I spend 2 hours rewriting a piece, but I would have spent that time looking for bugs anyway.
You may also want to consider getting the product owner and scrummaster to capture a separate velocity for the old code vs the new code, and use that accordingly.
If there's a desired new feature and you can delineate a non-overwhelming hunk of code that is in the way, then you might be able to get management's blessing to replace the old code with new code that has the desired new feature. When I did this, I had to write a somewhat ugly shim layer to meet the old interfaces of the part of the software I wasn't going to touch. And a test harness that could exercise the existing code and exercise the new code to make sure the new code, as seen through the shim layer, could fool the rest of the application into thinking nothing had changed. By reworking the portion we reworked, we were able to show huge performance benefits, compatibility with desired new hardware, reduction in each of our field site's needs for expertise in administering space for the application - and the new code was much more maintainable. That last point mattered not a whit to the users, but the other advantages from the rework were attractive enough to "sell" the users on the merits of a somewhat painful database conversion.
Another more modest success story: we had a decent trouble tracking system that had literally years of history. There was a subsystem of our application that was famed for the speed with which it would burn out maintenance programmers. Clearly (well, clearly in my mind) it was in need of a major re-write, but management wasn't enthused about that. We were able to dig through the history in the trouble tracking data to show the staffing level that had gone into maintaining this module, and for all that effort, the trouble tickets per month against that module continued to arrive at a constant rate. When faced with actual data like that, even the reluctant managers who had long been tight-fisted about staffing re-work of that subsystem could see the merit of assigning staff to rework that module.
The approach as before was to leave the input and output of that module alone. The good news was that throwing virtual memory at the new code with its fancy new data structures did give a noticeable performance improvement to the module. The bad news is that we were nearly done with the re-implementation before we really understood what was wrong in the original implementation such that it did work most of the time, but managed to fail on some of the transactions on some days. The first cut faithfully reproduced those bugs, but the bugs were easier to understand in the reworked code so we now had a shot at really fixing the real problem. In retrospect, maybe we'd have been smarter to have captured data that produced the problems and have taken better care to make sure the reworked version didn't reproduce that problem. But, the truth is, nobody understood the problem until we were quite far along on the re-write. So, the re-write gave improved performance to the users and improved understanding to the current programmers, such that the real problem could really be resolved at last.
A fail example: There was yet another incredibly ugly module that persistently was a sore spot. Alas, I wasn't clever enough to be able to understand the defacto interfaces to this particular wretched hive of scum and villainy, at least not in the time frame of the nominal release schedule. I'd like to believe that given more time we could have figured out a suitable plan for re-working that piece of the system too, and maybe once we understood it, we could even identify user-desired improvements that we could fit into the re-write. But I can't promise that you'll find a prize in every box. If the box is entirely obscure to you, slicing away a chunk of it and replacing that piece with clean code is hard to do. The guy who had charge of that module is probably the one who was best positioned to figure out a plan of attack, but he saw the frequent crashes and calls from the field for assistance as "job security". I don't think management ever really recognized that he needed to be eased aside for someone with a hunger for change, but that's what probably was needed.
Drew
I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see the related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
"There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible."
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horribly bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure."
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want, and it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". Besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Did you ever wonder why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can go to as far as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is because software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "less bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now". Even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control, and software engineering as a whole, is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF), which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects, as the software engineering schools want to make you think. He just adds another parameter to control, which is "Do I want to control this project? Will it be needed?"
Laziness / Considered boring
Management feeling it's unnecessary
Ignorant "Just do it right" attitude
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project"
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
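For what it's worth, a quick after-the-fact test in JUnit 4 can be as small as the sketch below. The class under test is invented and nested into the test file purely so the example compiles on its own; it is not meant as anyone's real code.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Tiny made-up class under test, nested here only to keep the example self-contained.
    static class PriceCalculator {
        double total(double amount) {
            // Assumed rule for the example: 10% discount for orders of 100.00 or more.
            return amount >= 100.00 ? amount * 0.9 : amount;
        }
    }

    @Test
    public void discountIsAppliedAtTheThreshold() {
        assertEquals(90.00, new PriceCalculator().total(100.00), 0.001);
    }

    @Test
    public void noDiscountBelowTheThreshold() {
        assertEquals(50.00, new PriceCalculator().total(50.00), 0.001);
    }
}
```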
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find such-minded people for example from the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to focus on code quality when there are thousands of new feature requests, and the programmers could simply implement something new instead of taking care of the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-offs or throw-aways. Of course, long after the project is done, it takes on a life of its own (and becomes prominent) as different business units start depending on it for their own business processes.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams exist to support the business, and where development is reactive rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout, and is willing to do the work to convince others of its value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning what to test and how. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learn techniques and patterns that increase testability. A small sketch of the database point follows at the end of this answer.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
So until you find the right way: yeah, it's dull, unrewarding, hard to do, time consuming, etc.
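As mentioned in the list above, here is a small sketch of the "abstract yourself from the database" point. It uses Mockito simply as one example of a mocking framework (the answer doesn't prescribe one), and every class and method name is invented for illustration.

```java
import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class AccountServiceTest {

    // The seam that hides the database from the code under test.
    interface AccountRepository {
        double balanceOf(String accountId);
    }

    // Invented class under test: it only talks to the interface, never to a real database.
    static class AccountService {
        private final AccountRepository repository;
        AccountService(AccountRepository repository) { this.repository = repository; }
        boolean canWithdraw(String accountId, double amount) {
            return repository.balanceOf(accountId) >= amount;
        }
    }

    @Test
    public void withdrawalIsRefusedWhenTheBalanceIsTooLow() {
        AccountRepository repository = mock(AccountRepository.class);
        when(repository.balanceOf("acc-1")).thenReturn(25.0);

        AccountService service = new AccountService(repository);

        assertFalse(service.canWithdraw("acc-1", 100.0));
        verify(repository).balanceOf("acc-1"); // no real database was touched
    }
}
```

Once a dependency sits behind an interface like this, the same pattern covers logging and file access as well: put the awkward collaborator behind a seam and substitute it in the test.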
EDIT:
In this blog post I go into depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and testing techniques and tools. Technical conference such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with (1) managers and (2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. Even if you set up mock data, can you be certain the database would behave the same way?
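One common way around that headache is to hide the database behind a small interface and hand-roll an in-memory fake seeded with exactly the data a test needs. It does not answer the objection above, since a fake still cannot prove the real database behaves the same way, but it keeps the surrounding logic cheap to test. A made-up sketch, with every name invented:

```java
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class LoyaltyCheckerTest {

    // Abstraction over whatever the real data source is.
    interface OrderStore {
        int countOrdersFor(String customerId);
    }

    // Hand-rolled fake: no database, just a map seeded by the test.
    static class InMemoryOrderStore implements OrderStore {
        private final Map<String, Integer> counts = new HashMap<>();
        void seed(String customerId, int orders) { counts.put(customerId, orders); }
        public int countOrdersFor(String customerId) { return counts.getOrDefault(customerId, 0); }
    }

    // Invented code under test.
    static class LoyaltyChecker {
        private final OrderStore store;
        LoyaltyChecker(OrderStore store) { this.store = store; }
        String tierFor(String customerId) {
            return store.countOrdersFor(customerId) >= 10 ? "gold" : "standard";
        }
    }

    @Test
    public void tenOrMoreOrdersEarnGoldTier() {
        InMemoryOrderStore store = new InMemoryOrderStore();
        store.seed("c-42", 12);
        assertEquals("gold", new LoyaltyChecker(store).tierFor("c-42"));
    }
}
```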
Unit testing is also weak at finding problems that haven't been considered. That is, unit testing is bad at simulating the unexpected. If you haven't considered what could happen during a power outage, or when the network link sends bad data that is still CRC-correct, writing tests for it is futile.
I am all in favour of code inspections as they let programmers share experience and code style from other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" (e.g., without unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was being changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and felt no need for unit tests. So we have a few tests that remain as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is lack of time. When you are assigned a task that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other parts of the project and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It will take time and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, though two or three is really needed. So new colleagues do not manage proper tests. And there is no point in creating bad tests.
It's 'dull' to chase some random 'feature' of extreme importance for more than a day through a mysterious code jungle written by someone else x years ago, without any clue what's going wrong, why it's going wrong, and with absolutely no idea what could fix it, when the job was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing to make me more productive, but I've seen lots of badly formatted, unreadable poorly designed code which passed all its tests ( generally long-in-the-tooth code which had been patched many times ). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of a shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but, for example, PMD says that I have too many methods in my class. So I should cut the class into abstract class/classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since it might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting, and you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
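For readers who haven't hit that particular warning, the rewrite such tools usually ask for looks roughly like the toy below; whether it is worth doing just to silence the tool is exactly the judgment call being described above. Nothing here comes from a real codebase.

```java
public class CastExample {

    // Guarding with instanceof means the cast can no longer throw ClassCastException.
    static void printLength(Object value) {
        if (value instanceof String) {
            String text = (String) value;
            System.out.println(text.length());
        }
    }

    public static void main(String[] args) {
        printLength("hello"); // prints 5
        printLength(42);      // prints nothing, and does not blow up
    }
}
```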
I think too-complicated code is not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on finding when code is too complicated and not care so much about the little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you changed anything you had to worry about broken (poorly written) unit tests, and you never knew what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place. For example, I did a lot of unit testing in my web page parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person who writes tests (I want to get better at it, and that's the only reason I do it). I understand that for others writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on a web page), so even if you write a test you still have to test it manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change to a web page. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.
I'm just now learning to program at age 17. It's hard for me to talk to other programmers as I'm just out of high school (which means I can't take programming courses). I know that I write terrible code, and not like Jeff Atwood terrible code; my code actually sucks. So where can I post some of my code and get real programmers to review it? I know if I had a question I could ask it on StackOverflow, but I want to post a whole class and get a review of it.
The real problem here is that I'm not going to be writing the next great piece of Software. I'm going to be writing a really useless class, which will serve no other purpose than to teach me how to program. This code will never be used, ever! EVER! How can I get an advanced (or even intermediate) programmer to look at my code?
Thanks in advance! ;-)
Look to the open source community. There are plenty of existing and new projects that would love an eager (if inexperienced) developer to offer support.
Going this route offers two advantages:
You get to see great code in action and learn from it
Any changes you submit will be reviewed by an experienced developer and they will often give you excellent suggestions as to how to improve your code before it will be accepted
Start by choosing a project in your language (there are a bunch in C++) and check out the code. You don't need to understand it all, but you must be able to understand at least a portion of it.
If the project looks way too complicated, keep looking. Younger projects tend to have less code that you need to learn.
If you can't get great programmers to look at your code, do the next best thing: look at theirs!
Look for a bunch of code snippets that do the same (simple) thing. Before you look at them too closely, write your own code to perform the same task. Compare all of the snippets with your own (and each other!) and try to figure out the reasons for the differences.
I recommend looking for code from well established projects. Code from tutorials often ignores important details for the sake of simplicity.
Why don't you try RefactorMyCode?
I would try not to write useless code, but attempt to solve some particular problem. Your learning will be more advanced if you are learning in the context of a real-world scenario. It doesn't have to be a big business domain; could even be a game or a shareware utility.
As for getting your code reviewed, the open source community is a good way to go as The Lame Duck says - in fact you're guaranteed it gets some form of review if you actually contribute to a project. Other avenues to explore: your local C++ users' group, checking out a co-op program available through a junior college, or engaging someone in a company that sponsors interns.
I haven't tried sites such as RefactorMyCode as suggested by Gilad Naor, but that seems promising. And, yes, StackOverflow is a good place for bite-sized chunks of code. If you do that, explain what you are trying to do, and why you are trying to do it that way, and ask if there's a better approach. Good luck!
I think the best way to learn is the way I learned (I may be biased): trial and error. I just wrote programs all the time, teaching myself as I went. I'd write terrible code, and I would wrestle with making it do what I wanted. Often it would make me give up on that particular project. But on the next project, I'd take a different approach, and it would work better. Repeat ad nauseam. Once you know where the rough spots are in your designs, you'll be able to ask specific questions on places like SO, or, better yet IMHO, come up with better designs yourself. I independently invented all the major design patterns just through frustration at the solutions I'd created in the past. I think this gives me a valuable perspective, since for most people design patterns are just a "best practice", but I know the pain that comes with using other designs, and I can see signs of bad designs in code very easily (it takes one to know one). This last skill is one that I often see lacking in other programmers... they can't see why their design is deficient and they should use something else.
You could always try a site like Project Euler, where there are a whole load of problems that will test your skills and a whole bunch of solutions to those problems, submitted by others. Project Euler tends to focus on algorithms rather than higher level programming constructs, but I imagine that there are others in a similar vein.
Do something fun and don't worry too much about code style yet. I started out with BASIC on Commodore 64 without even realizing that there was such a thing as clean code vs dirty code. If I had worried a lot about that then, it might have hindered me from progressing. You always learn best when doing it playfully.
Maybe a bit late, but since Stack Exchange has Code Review, it's worth an answer:
Code Review Stack Exchange is a question and answer site for peer programmer code reviews. It's 100% free, no registration required.
Here is the link: Code Review Stack Exchange
This question already has answers here:
Closed 13 years ago.
Duplicate:
Why should I practice Test Driven Development and how should I start?
For a developer that doesn't know about Test-Driven Development, what problem(s) will be solved by adopting TDD?
[EDIT] Let's assume that the developer already (ab)uses a unit testing framework.
Here are three reasons that TDD might help a developer/team:
Better understanding of what you're going to write
Enforces the policy of writing tests a little better
Speeds up development
One reason to write the tests first is to have a better understanding of the actual code before you write it. To me, this is the main benefit of test driven development. When you write the test cases first, you think more critically about the corner cases. It's then easier to address them when you write the code and ensure that they're accurate.
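As a made-up illustration of corner cases surfacing early, imagine the tests below being sketched before the implementation exists: deciding what an empty or whitespace-only string should return has to happen up front instead of being discovered later. JUnit 4, with all names invented; the class under test is nested only to keep the snippet self-contained.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class WordCounterTest {

    // Invented implementation, written after the tests below had pinned down the corner cases.
    static class WordCounter {
        int count(String text) {
            String trimmed = text.trim();
            if (trimmed.isEmpty()) {
                return 0; // behaviour decided while writing the first two tests
            }
            return trimmed.split("\\s+").length;
        }
    }

    @Test
    public void emptyStringHasZeroWords() {
        assertEquals(0, new WordCounter().count(""));
    }

    @Test
    public void whitespaceOnlyStringHasZeroWords() {
        assertEquals(0, new WordCounter().count("   "));
    }

    @Test
    public void wordsSeparatedByMultipleSpacesAreEachCountedOnce() {
        assertEquals(3, new WordCounter().count("unit  tests   help"));
    }
}
```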
Another reason is to actually enforce writing the tests. Often when people do unit-testing without the TDD, they have a testing framework set up, write some new code, and then quit. They think that the code already works just fine, so why write tests? It's simple enough that it won't break, right? But now you've lost the advantages of doing unit-tests in the first place (completely different discussion). Write them first, and they're already there.
Writing these tests first could mean that you don't need to launch the program in a debugging environment (slow — especially for larger projects) to test if a few small things work. Of course there's no excuse for not doing so before committing changes.
Convincing yourself or other people to write the tests first may be difficult. You may have better luck getting them to write both at the same time which may be just as beneficial.
Presumably you test code that you've written before you commit it to a repository.
If that's not true you have other issues to deal with.
If it is true, you can look at writing tests using a framework as a way to automate those main routines or drivers that you currently write so you can run all of them automatically at the push of a button. You don't have to pore over output to decide if the test passed or failed; you embed the success or failure of the test in the code and get a thumbs up or down decision right away. Running all the tests at once reduces the chances of a "whack a mole" situation where you fix something in one class and break something else. All the tests have to pass.
Sounds good so far, yes?
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
If you're already using a test framework, getting good value out of the tests you write, and have meaningful code coverage up around 70%, then I think you're doing well. I'm not sure that TDD will give you much more value. It's up to you to decide whether or not you go that extra mile. Personally, I don't do it. I write tests after the class and refactor if I feel the need. Some people might find it helpful to write the test first knowing it'll fail, but I don't.
(This is more of a comment agreeing with duffymo's answer than an answer of its own.)
duffymo answers:
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
I think it's actually to force coders to think about what their code is doing. Having to think about a test makes one consider what the code is supposed to do: what the pre-conditions and post-conditions are, which functions are primitive and which are composed of primitive functions, what the minimal necessary public interface is, and what's an implementation detail.
These are all things I routinely think about, so like you, "test first" doesn't add a whole lot, for me. And frankly (I know this is heresy in some circles) I like to "anchor" the core ideas of a class by sketching out the public interface first; that way I can look at it, mentally use it, and see if it's as clean as I thought it was. (A class or a library should be easy and intuitive for client programmers to use.)
In other words, I do what TDD tries to ensure happens by writing tests first, but like duffymo, I get there a different way.
And the real point of "test first" is to get a coder to pause and think like a designer. It's silly to make a fetish of how the programmer enters that state; for those who don't do it naturally, "test first" serves as a ritual to get them there. For those who do, "test first" doesn't add much -- and can get in the way of the programmer's habitual way of getting into that state.
Again, we want to look at results, not rituals. If a junior guy needs a ritual, a "stations of the cross" or a rosary* to "get in the groove", "test first" serves that purpose. If someone has their own way to get there, that's great too.
Note that I'm not saying that code shouldn't be tested. It should. It gives us a safety net, which in turn allows us to concentrate our attention on writing good code, even audacious code, because we know the net is there to catch errors.
All I am saying is that fetishistic insistence on "test first" confuses the method (one of many) with the goal, making the programmer think about what he's coding.
* To be ecumenical, I'll note that both Catholics and Muslims use rosaries. And again, it's a mechanical, muscle-memory way to put oneself into a certain frame of mind. It's a fetish (in the original sense of a magic object, not the "sexual fetish" meaning) or good-luck charm. So is saying "Om mani padme hum", or sitting zazen, or stroking a "lucky" rabbit's foot, (Not so lucky for the rabbit.) The philosopher Jerry Fodor, when thinking about hard problems, has a similar ritual: he repeats to himself, "C'mon, Jerry, you can do it!" (I tried that too, but since my name is not Jerry, it didn't work for me. ;) )
Ideally:
You won't waste time writing features you don't need. You'll have a comprehensive unit test suite to serve as a safety net for refactoring. You'll have executable examples of how your code is intended to be used. Your development flow will be smoother and faster; you'll spend less time in the debugger.
But most of all, your design will be better. Your code will be better factored - loosely coupled, highly cohesive - and better formed - smaller, better-named methods & classes.
For my current project (which runs on a relatively heavyweight process), I have adopted a peculiar form of TDD that consists of writing skeleton test cases based on requirements documents and GUI mockups. I write dozens, sometimes hundreds of those before starting to implement anything (this runs totally against "pure" TDD which says you should write a few tests, then immediately start on a skeleton implementation).
I have found this to be an excellent way to review the requirements documents. I have to think about the behaviour described in them much more intensively than if I were just to read them. As a consequence, I find many more inconsistencies and gaps in them which I would otherwise only have found during implementation. This way, I can ask for clarification earlier and have better requirements when I start implementing.
Then, during implementation, the tests are a way to measure how far I've yet to go. And they prevent me from forgetting anything (don't laugh, that's a real problem when you work on larger use cases).
And the moral is: even when your dev process doesn't really support TDD, it can still be done in a way, and improve quality and productivity.
I personally do not use TDD, but one of the biggest pros I can see with the methodology is the assurance of customer satisfaction. Basically, the idea is that the steps of your development process are these:
1) Talk to customer about what the application is supposed to do, and how it is supposed to react to different situations.
2) Translate the outcome of 1) into Unit Tests, which each test one feature or scenario.
3) Write simple, "sloppy" code that (barely) passes the tests. When this is done, you have met your customer's expectations.
4) Refactor the code you wrote in 3) until you think you've done it in the most effective way possible.
When this is done you have hopefully produced high-quality code that meets your customer's needs. If the customer now wants a new feature, you start the cycle over - discuss the feature, write a test that makes sure it works, write code that passes the test, refactor.
And as others have said, each time you run your tests you ensure that the old code still works, and that you can add new functionality without breaking the old. (A compressed sketch of one such cycle follows below.)
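Here is one compressed, invented walk-through of steps 2 to 4 for a single tiny "feature" (free shipping over a threshold). None of it comes from the answer itself; it only shows the shape of the cycle in JUnit 4, with the refactoring step described in the comments.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShippingCostTest {

    // Step 2: the customer conversation captured as tests, written before the code existed.
    @Test
    public void ordersOfFiftyOrMoreShipForFree() {
        assertEquals(0.0, ShippingCost.forOrder(50.0), 0.001);
    }

    @Test
    public void smallerOrdersPayTheFlatShippingRate() {
        assertEquals(4.95, ShippingCost.forOrder(20.0), 0.001);
    }

    // Steps 3 and 4: the first version was a pair of hard-coded if-statements, just enough
    // to go green; it was then refactored into the single expression below while the tests
    // kept guaranteeing the behaviour agreed with the customer.
    static class ShippingCost {
        static double forOrder(double orderTotal) {
            return orderTotal >= 50.0 ? 0.0 : 4.95;
        }
    }
}
```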
Most of the people I have talked to don't use a complete TDD model. They usually find the best testing model that works for them. Find yours: play with TDD and find where you are the most productive.
TDD (Test-Driven Development/Design) provides the following advantages:
ensures you know the story card's acceptance criteria before you start
ensures that you know when to stop coding (i.e., when the acceptance criteria have been met, thus preventing gold plating)
As a result you end up with code that is
testable
cleanly designed
able to be refactored with confidence
the minimal code necessary to satisfy the story card
a living specification of how the code works
able to support a sustainable pace of new features
I made a big effort to learn TDD for Ruby on Rails development. It took several days before I really got into it. I was very skeptical, but I made the effort because programmers I respect support it.
At this point I feel it was definitely worth the effort. There are several benefits which I'm sure others will be happy to list for you. To me the most important advantage is that it helps avoid that nightmare situation late in a project where something suddenly breaks for no apparent reason and then you're spending a day and a half with the debugger. It helps prevent your code base from deteriorating as you add more and more logic to it.
It is common knowledge that writing tests and having a large number of automated tests are a Good Thing.
However, without TDD, it often just becomes tedious. People write tests, and then leave it, and the tests do not get updated as they should, nor do new features get tested as often as they should either.
A big part of this is because the code has become a pain to test - TDD will influence your design so that it is much easier to test. Because you've used TDD, you have a good number of tests, which makes it much easier to find regressions whenever your code or requirements change, simplifying debugging dramatically, causing an appreciation of good TDD and encouraging more tests to be written when changes are needed - and we're back to the start of the cycle.
There are many advantages:
Higher code quality
Fewer bugs
Less wasted time
Any of those alone would be sufficient justification to implement TDD.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I've been test-infected for a long time now, but it would seem the majority of developers I work with either have never tried it or dismiss it for one reason or another, with arguments typically being that it adds overhead to development or they don't need to bother.
What bothers me most about this is that when I come along to make changes to their code, I have a hard time getting it under test as I have to apply refactorings to make it testable and sometimes end up having to do a lot of work just so that I can test the code I'm about to write.
What I want to know is, what arguments would you use to persuade other developers to start writing unit tests? Most developers I've introduced to it take to it quite well, see the benefits and continue to use it. This always seems to be the good developers though, who are already interested in improving the quality of their code and hence can see how unit testing does this.
How do you persuade the rest of the motley crew? I'm not looking for a list of testing benefits, as I already know what these are, but what techniques you have used or would use to get other people on board. Tips on how to persuade management to take an active role are appreciated as well.
There's more than one side to that question, I guess. I find that actually convincing developers to start using tests is not that hard, because the list of advantages of testing often speaks for itself. That said, it is quite a barrier to actually get going, and I find that the learning curve is often a bit steep, especially for novice coders. Throwing testing frameworks, the TDD test-first mentality, and mocking frameworks at someone who's not yet comfortable with C#, .NET, or programming in general could be just too much to handle.
I work as a consultant and therefore I often have to address the problem of implementing TDD in an organization. Luckily enough, when companies hire me it is often because of my expertise in certain areas, and therefore I might have a little advantage when it comes to getting people's attention. Or maybe it's just a bit easier for me as an outsider to come into a new team and say "Hi! I've tried TDD on other projects and I know that it works!" Or maybe it's my persuasiveness/stubbornness? :) Either way, I often don't find it very hard to convince devs to start writing tests. What I find hard, though, is to teach them how to write good unit tests. And as you point out in your question, to stay on the righteous path.
But I have found one method that I think works pretty well when it comes to teaching unit testing. I've blogged about it here, but the essence is to sit down and do some pair programming. While pair programming, I start out by writing the unit test first. This way I show them a bit of how the testing framework works, how I structure the tests, and often some use of mocking. Unit tests should be simple, so all in all the test should be fairly easy to understand even for junior devs. The worst part to explain is often the mocking, but using easy-to-read mocking frameworks like Moq helps a lot. Then when the test is written (and nothing compiles or passes) I hand over the keyboard to my fellow coder so that (s)he can implement the functionality. I simply tell her/him: "Make it go green!" Then we move on to the next test; I write the test, and the 'soon-to-be-test-infected' dev next to me writes the functionality.
Now, it's important to understand that at this point the dev(s) you are teaching are probably not yet convinced that this is the right way to code. The point where most devs seem to see the (green) light is when a test fails due to some code changes that they never thought would break any functionality. When the test that covers that functionality blows up, that's when you've got yourself a loyal TDD'er on your team. Or that's at least my experience, but as always; your mileage will vary :)
Quality speaks for itself. If you're more successful than everyone else, that's all you need to say.
Use a test-coverage tool. Make it very visible. That way everybody can easily see how much code in each area is passed, failed and untested.
Then you may be able to start a culture where "untested" is a sign of bad coding, "failed" is a sign of work in progress and "passed" is a sign of finished code.
This works best if you also do "test-first". Then "untested" becomes "you forgot step 1".
Of course you don't need 100% test coverage. But if one area has 1% coverage and another has 30%, you have a metric for which area is most likely to fail in production.
Lead by example: if you can, gather evidence that there are fewer regressions in unit-tested code than elsewhere.
Get QA and management buy-in so that your process mandates unit testing.
Be ready to help others to get started with unit testing: provide assistance, supply a framework so that they can start easily, run an introductory presentation.
You just have to get used to the mantra "if it ain't tested, the work ain't done!"
Edit: To add some more meat to my facetious comment above, how can someone know if they're actually finished if they haven't tested their work?
Mind you, you will have a battle convincing others if time isn't allowed in the estimate for testing the developed code.
A one-to-one split between coding effort and testing effort seems to be a good number.
HTH
cheers,
Rob
Give compliments to those who write more tests and produce good results, show the best work to others, and ask them to produce the same or better results.
People (and processes) don't change without one or more pain points. So you need to find the significant pain points and demonstrate how unit testing might help deal with them.
If you can't find any significant pain points, then unit testing may not add a lot of value to your current process.
As Steve Lott implies, delivering better results than the other team members will also help. But without the pain points, my experience is that people won't change.
Two ways: convince the project manager that unit testing improves quality AND saves time overall, then have him make unit tests mandatory.
Or wait for a development crunch just before an important release date, where everyone has to work overtime and weekends to finish the last features and eliminate the last bugs, only to find they've just introduced more bugs. Then point out that with proper unit tests they wouldn't have to work like that.
Another situation where unit tests can be shown as indispensable is when a release was actually delivered and turns out to contain a serious bug due to last-minute changes.
If developers are seeing that the "successful" developers are writing unit tests, and they are still not doing it then I suggest unit tests should become part of the formal development life-cycle.
E.g. nothing can be checked in until a unit test is written and reviewed.
Probably reefnet_alex' answer helps you:
Is Unit Testing worth the effort?
I think it was Fowler who said:
"Imperfect tests, run frequently, are
much better than perfect tests that
are never written at all". I
interprate this as giving me
permission to write tests where I
think they'll be most useful even if
the rest of my code coverage is
woefully incomplete.
You mentioned that your manager is on board with unit tests. If that's the case, then why isn't he (she) enforcing it? It isn't your job to get everybody else to follow along or to teach them; in fact, other developers will often resent you if you try to push it on them. In order to get your fellow developers to write unit tests, the manager has to emphasize it strongly. Part of that emphasis might turn out to be education on unit test implementation, and you might end up being the educator, which is great, but management of it is everything.
If you're in an environment where the group decides the style of implementation, then you have more of a say in how the group dynamic should be. If you are in that sort of environment and the group doesn't want to emphasize unit tests while you do, then maybe you're in the wrong group/company.
I have found that "evangelizing" or preaching rarely works. As others have said, do it your way for your own code, make it known that you do it, but don't try to force others to do it. If people ask about it be supportive and helpful. Offer to do a few lunch-time seminars or informal dog and pony shows. That will do a lot more than just complaining to your manager or the other developers that you have a hard time writing tests for code they wrote.
Slow and steady - it is not going to change overnight.
Once I realized that, at one place where I worked, acceptance of peer reviews improved tremendously. My group just did it and stopped trying to get others to do it. Eventually people started asking about how we got some of the success we did. Then it was easier.
We have a test framework which includes automated running of the test suite whenever anyone commits a change. If someone commits code that fails the tests, the whole team gets emailed with the errors.
This leads to introduced bugs being fixed pretty quickly.