How to prove to my stakeholders and manager that my software works? [closed] - unit-testing

What do software engineers encounter after yet another stressful release? Well, the first thing we encounter in our group is the bugs we have released into the open. The biggest problem we as software engineers face after a stressful release is spaghetti code, also called the big ball of mud.
The time and money to chase perfection are seldom available, nor should they be. To survive, we must do what it takes to get our software working and out the door on time. Indeed, if a team completes a project with time to spare, today’s managers are likely to take that as a sign to provide less time and money or fewer people the next time around.
You need to deliver quality software on time, and under budget
Cost: Architecture is a long-term investment. It is easy for the people who are paying the bills to dismiss it, unless there is some tangible immediate benefit, such as a tax write-off, or unless surplus money and time happens to be available. Such is seldom the case. More often, the customer needs something working by tomorrow. Often, the people who control and manage the development process simply do not regard architecture as a pressing concern. If programmers know that workmanship is invisible, and managers don't want to pay for it anyway, a vicious circle is born.
But if that were really the case, then every long-term software project would eventually end up as a big ball of mud.
We know that does not always happen. How come? Because the statement that managers do not regard architecture as a pressing concern is false, at least nowadays. Managers in the IT field know very well that maintainability is key to the business.
The business becomes dependent upon the data driving it. Businesses have become critically dependent on their software and computing infrastructures. There are numerous mission-critical systems that must be on the air twenty-four hours a day, seven days a week. If these systems go down, inventories cannot be checked, employees cannot be paid, aircraft cannot be routed, and so on. [..]
Therefore it is at the heart of the business to seek ways to keep systems far away from the big ball of mud: to keep the system maintainable, to make sure the system actually works, and to make sure that you, as a programmer, can prove it does. Does your manager ask you whether you have finished your coding today? Does she ask whether the release that contains fixes A, B and C can ship today, or does she ask whether the software that will be released actually works? And have you proved it works? With what?
Now for my question:
What ways do we have to prove to our managers and/or stakeholders that our software works? Are the green lights of our unit tests good enough? If yes, won't that only prove our big ball of mud is still doing what we expect it to do? How do we prove the software is maintainable? How can you prove your design is right?
[Added later]
Chris Pebble's answer below is putting my team on the right track. Quality assurance is definitely what we are looking for. Thanks, Chris. Having a QA policy agreed with the stakeholders is then the logical outcome of what my team is after.
The follow-up question is: what should that QA policy contain?
Having the build server's status visibly available to my stakeholders
Having the build server not only 'just build' but also run the tests that are part of the QA policy (a minimal sketch of such a test suite follows this list)
Having an agreement from my stakeholders on our development process (of which developers reviewing each other's code is a part)
more...
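To make the second point concrete, here is a minimal sketch (not our actual code; the suite member class names are hypothetical) of how the tests agreed in the QA policy could be grouped so the build server runs them on every build, and a red build means "the QA policy is violated" rather than just "compilation failed". It uses JUnit 4's suite runner.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Hypothetical example: all tests that belong to the agreed QA policy are
// collected in one suite that the build server executes on every build.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    CustomerLookupScenarioTest.class,   // scenario tests agreed with the test team
    OrderServiceRegressionTests.class   // one test per previously reported defect
})
public class QaPolicySuite {
    // Intentionally empty: the annotations above define the QA sign-off run.
}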
Some more information: the team I'm leading is building web services that are consumed by other software teams. That is why a broken web service immediately costs money. When the developers of the presentation-layer team, or the actual testers, can't move forward, we are under immediate stress and have to fix bugs ASAP, which in turn leads to quick hacks...
[ added later ]
Thanks for all the answers. It is indeed about 'trust'. We cannot do a release if the software is not trusted by the stakeholders, who are actively testing our software themselves using the website that consumes our web service.
When issues arise, the first question our testers ask is: is it a service-layer problem or a presentation-layer problem? Which directs me towards a QA policy that ensures our software is OK for the tests they are doing.
So, the only way I can (now) envision building trust with the testers is to:
- Talk with the current test team, go over the tests they are able to execute manually (from their test scripts and scenarios), and make sure our team already has those tests automated and checked against our web service. That would be a good starting point for a 'sign-off' before we do a release that the presentation-layer team has to integrate. It will take some effort to explain that creating automated tests for all those scenarios will take time, but it will definitely be useful to ensure that what we build actually works.
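To illustrate that starting point, here is a rough sketch of what one of those manual scenarios ("look up an existing customer") might look like as an automated JUnit test that calls the web service the same way the testers do through the website. The URL, the customer id and the expected response fragment are invented placeholders, not details of our actual service.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class CustomerLookupScenarioTest {

    @Test
    public void existingCustomerIsReturnedWithStatus200() throws Exception {
        // Placeholder endpoint; the real test would point at the test environment.
        URL url = new URL("http://test-server/customerservice/customers/42");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        assertEquals(200, connection.getResponseCode());

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            body.append(line);
        }
        reader.close();

        // The same check the tester would otherwise do by eye in the consuming website.
        assertTrue(body.toString().contains("<customerId>42</customerId>"));
    }
}

One green run of such a scenario suite against the test environment can then serve as the 'sign-off' evidence before the presentation-layer team integrates.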

I'm part of a team working on a large project for a governmental client.
The first module of phase 1 was a huge disaster: the team was unmanaged, there was no QA team, and the developers were not motivated to do better. Instead, the manager kept shouting and deducting wages from people who didn't work overtime!
The client? Huh, don't ask about that. They were really pissed off, but they stayed with our company because they knew that no one understands the business as we do.
So what was the solution?
First, management was separated from the programmers, and a friendly team leader was put in place.
Second, a qualified QA team was brought in. In the first few weeks, bugs were found in the hundreds.
Third, 2-3 developers were assigned as a support team; their responsibility is not to take on any new tasks, just to fix bugs and work directly with QA.
Fourth, motivate the team. Sometimes it's not about money or extra vacation; sometimes a good word is perfect. Small example: after working 3 days in a row for almost 15 hours a day, the team leader made a note to the manager. Two days later I received a letter from the CEO thanking me for my efforts and giving me 2 vacation days.
We will soon deliver the 4th module of the system, and as one of the support team I can say it's at least 95% bug-free, which is a huge jump from our first modules.
Today we have a powerful development team, qualified QA and expert bug fixers.
Sorry for the long story, but that's how our team (over 4 months) proved to the manager and the client that we are reliable and just need the right environment.

You can't prove it beyond the scope of the tests, and unless there is a bulletproof specification (which there never is), the tests never prove anything beyond the obvious.
What you can do as a team is approach your software design in a responsible manner: don't give in to the temptation of writing bad code to please managers, demand the necessary resources and realistic deadlines, and treat the whole process as much as a craft as a job. The finest Renaissance sculptors knew no one would see the backs of the statues to be placed in the corners of cathedrals, but they still took the effort to make sure they weren't selling themselves short.
As a team the only way to prove your software is reliable is to build up a track record: do things correctly from the start, fix bugs before implementing new features, never give in to the quick hack fix, and make sure everyone shares the same enthusiasm and respect for the code.

In all but trivial cases, you cannot 'prove' that your software is correct.
That's the role of User Acceptance Testing: to show that an acceptable level of usefulness has been reached.

I think this is putting the cart before the horse. It's tantamount to a foot soldier trying to explain to the general what battle maneuvers are and why protecting your flanks is important. If management can't tell the difference between quality code and a big ball of mud, you're always going to end up delivering a big ball of mud.
Unfortunately it is completely impossible to "prove" that your software is working bug-free (the Windows XP commercials always annoyed me by announcing "the most secure version of Windows ever"; that's impossible to prove at release). It's up to management to set up and enforce a QA process and establish metrics as to what a deliverable product actually looks like and what level of bugs or unexpected behaviors is acceptable in the final release.
That said, if you're a small team and set your own QA policies with little input from management I think it would be advantageous to write out a basic QA process and have management sign off on it. For our web apps we currently support 4 browsers -- and management knows this -- so when the application breaks in some obscure handheld browser everyone clearly understands that this is not something we designed the application to support. It also provides good leverage for hiring additional development or test resources when management decides it wants to start testing for x.

As Billy Joel once sang, "It's always been a matter of trust."
You need to understand that software development is "black magic" to everyone except those who write software. It's not obvious (actually, it's quite counterintuitive) to the rest of your company that many of your initiatives lead to increased quality and reduced risk of running over time and/or budget.
The key is to build a trusting, respectful relationship between development and other parts of the business. How do you build this trust? Well that's one of those touchy-feely people problems... you will need to experiment a little. I use the following tools often:
Process visibility - make sure everyone knows what you are doing and how things are progressing. Also, make sure everyone can see the impact of any changes that occur during development.
Point out the little victories - to build up trust, point out when things went exactly as you planned. Try to find situations where you had to make a judgement call and use the term "mitigated the risk" with your management.
Do not say, "I told you so." - let's say that you told management that you needed 2 months to undertake some task and they say, "well you only have three weeks." The outcome is not likely to be good (assuming your estimate is accurate). Make management aware of the problem and document all the things you had to do to try to meet the deadline. When quality is shown to be poor, you can work through the issues you faced (and recorded) in a professional manner rather than pointing the finger and saying, "I told you so."
If you have a good relationship with your manager, you can suggest that they read some books specific to software development so they can understand industry best practices.
Also, I would point out to your boss that not allowing you to act as a professional software developer is hurting your career. You really want to work somewhere that lets you grow professionally rather than somewhere that turns you into a hacker.

Related

Our code sucks and I'm powerless to fix it. Help! [closed]

Our code sucks. Actually, let me clarify that. Our old code sucks. It's difficult to debug and is full of abstractions that few people understand or even remember. Just yesterday I spent an hour debugging in an area that I've worked for over a year and found myself thinking, "Wow, this is really painful." It's not anyone's fault - I'm sure it all made perfect sense initially. The worst part is usually It Just Works...provided you don't ask it to do anything outside of its comfort zone.
Our new code is pretty good. I think we're doing a lot of good things there. It's clear, consistent, and (hopefully) maintainable. We've got a Hudson server running for continuous integration and we have the beginnings of a unit test suite in place. The problem is our management is laser-focused on writing New Code. There's no time to give Old Code (or even old New Code) the TLC it so desperately needs. At any given moment our scrum backlog (for six developers) has about 140 items and around a dozen defects. And those numbers aren't changing much. We're adding things as fast as we can burn them down.
So what can I do to avoid the headaches of marathon debugging sessions mired in the depths of Old Code? Every sprint is filled to the brim with new development and showstopper defects. Specifically...
What can I do to help maintenance and refactoring tasks get high enough priority to be worked?
Are there any C++-specific strategies you employ to help prevent New Code from rotting so quickly?
Your management may be focused on getting working features into the product, and keeping them working. In this case, you will need to make a business case for refactoring the old stuff, in that by X investment of time and effort you can reduce necessary maintenance time by Y over period Z. Or your management may be fundamentally clueless (this happens, but less often than most developers seem to think), in which case you'll never get permission.
You need to see the business point of view. It doesn't matter to the end user whether the code is ugly or elegant, only what the software does. The cost of bad code is potential unreliability and additional difficulty in changing it; the emotional distress it causes to the programmer is rarely considered.
If you can't get permission to go in and refactor, you can always try it on your own, a little bit at a time. Whenever you fix a bug, do a little rewriting to make things clearer. This may turn out to be faster than the minimum possible fix, particularly in verifying that the code now works. Even if it isn't, it's usually possible to take a little more time on a bug fix without getting into trouble. Just don't get carried away.
If you can leave the code just a little better each time you go in, you'll feel a lot better about it.
Stand Up Meetings
I might go to my mechanic, and we have a little stand-up meeting in the morning:
I tell him I want my wheels aligned, my tires rotated, and my oil changed. I mention that, "Oh, by the way, my brakes felt a little soft on the way in. Could [he] take a look at them? How soon can I get my car back, because I need to get back to work?"
He pops his head under my car, pops back up and says my brakes are leaking oil and starting to fail. He will need a part that will arrive at 10:30am. His man won't finish before lunch, but I should get my car back by 1:30pm or so. He's booked solid, so he won't be able to do any of the other stuff today, and I will have to book another appointment.
I ask if he can do the other stuff and I come back for the brakes. He tells me he really can't let me drive out of there without fixing the brakes because they might cause an accident, but if I want to go to another mechanic, he can call for a tow.
Since the car will be done so shortly after lunch, I ask if his man can take a late lunch so I can get my car back an hour earlier.
He tells me his men come in at 8am and often work into the evening. They earn every break they get, and his man deserves to take his lunch with everyone else.
None of that is what I wanted to hear. I wanted to hear that I would drive out of there in a half hour with my wheels, tires and oil done.
My mechanic was just straight up and honest with me. Are you straight up and honest with your management? Or do you avoid telling them things they don't want to hear?
Unit Testing
I wouldn't touch a line of code I didn't understand, and I wouldn't check in a new line of code I didn't test thoroughly. (At least, not intentionally.)
Your question seems to imply that somehow a large corpus of poorly documented code made it past review without any unit tests. Maybe you participated in that, and maybe you didn't. Everyone involved needs to accept responsibility for that--including management. Regardless, what's done is done. You cannot go back and change it.
However, right now, in the present time, it is everybody's responsibility to stop the behavior that led to the problem in the first place. You say you spent a year working in code that you find difficult to understand and that has no unit tests. During that year, as you worked hard to improve your understanding, how many unit tests did you write to document and to verify that understanding?
As you struggled through the code slowly gaining understanding, how many comments did you add so you wouldn't have to struggle next time?
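One concrete way to turn that hard-won understanding into something durable is a characterization test: it asserts what the old code currently does, not what anyone thinks it should do, so the knowledge is captured and any future change in behaviour shows up immediately. A minimal sketch, with an invented legacy class and numbers standing in for whatever you just debugged:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LegacyPricingCharacterizationTest {

    // Records today's observed behaviour of the old code. If this test ever fails,
    // either the behaviour changed on purpose (update the test) or a regression
    // slipped in (fix the code).
    @Test
    public void bulkDiscountIsCappedAtTwentyPercent() {
        LegacyPricing pricing = new LegacyPricing(); // hypothetical legacy class

        double price = pricing.priceFor(500, 10.00);

        // Observed while debugging: the cap is 20%, despite what the old comments claim.
        assertEquals(4000.00, price, 0.001);
    }
}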
Scrum Backlog
Personally, I think the term "Scrum backlog" is a misnomer. A list of things to do is just a list--a shopping list if you will. I had a list when I went to the mechanic. My stand up meeting with the mechanic was really more of a sprint planning meeting.
A sprint planning meeting is a negotiation. If your management is time boxing without that negotiation, they aren't managing anything. They are simply trying to cram 10 lbs of shit into a 5 lb sack, and it's your responsibility to tell them so.
When you show up to a sprint planning meeting, you are expected to commit to a body of work, and it's your responsibility to prepare for that. Preparation means having some idea of what you will have to do to complete each item on the list--including the time it takes to understand obscure code and the time it takes to write unit tests.
If someone invites you to a planning meeting where you won't have time to prepare, decline the meeting and suggest when to reschedule so you will have time.
If you have an existing body of code with no unit tests and a feature might conceivably affect the operation of that code, you need to write unit tests for as much of the old code as might be affected. When you commit to writing the feature, you are committing to doing that work. If that leaves you too little time to commit to some other feature, just say so. Don't commit to the other feature.
When you commit to fix a defect, you commit to testing your work. Obviously, that means writing a unit test for the defect. But if it involves old code with no unit tests, it also means writing unit tests for things that aren't broken yet, but might break due to your change. How else will you test the fix?
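As a hedged sketch (all names invented): if the defect is "totalling an empty basket blows up", the commitment includes a regression test named after the ticket, written to fail before the fix and pass afterwards.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class BasketTotalRegressionTest {

    // Small stand-in for the real production class, shown here after the fix.
    static class Basket {
        private final List<Double> itemPrices = new ArrayList<Double>();

        void add(double price) {
            itemPrices.add(price);
        }

        double total() {
            // The old implementation blew up on an empty basket; the fix handles it.
            double sum = 0.0;
            for (double price : itemPrices) {
                sum += price;
            }
            return sum;
        }
    }

    // Named after the (hypothetical) ticket so the link to the defect stays obvious.
    @Test
    public void bug1234_totalOfEmptyBasketIsZeroInsteadOfThrowing() {
        assertEquals(0.0, new Basket().total(), 0.0);
    }
}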
If your defect list remains a constant size, your team regresses as much as it fixes. Politely explain to whoever needs to understand that unit tests prevent the regressions that currently keep your defect list from shrinking.
If you fail to write those unit tests because you commit to too many features, whose responsibility is that?
Refactoring
When you refactor code, you have to test all of it, and that means writing unit tests for all of it. If you have a large body of code with no unit tests, you will have to write all of those unit tests before you refactor.
I suggest you hold off on refactoring until those unit tests are in place. In the meantime, if you insist on including unit tests in your estimates for the work you commit to, eventually all those unit tests will be there. And then you can refactor.
The one exception to that is refactoring for testability. You may find that some of the code was not designed for test and that you have to refactor for things like dependency injection before you can create your unit tests. When you commit to writing the feature that requires the unit test, you commit to making the code testable. Include that in your estimate when you commit to the feature.
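Here is a hedged sketch of what refactoring for testability via dependency injection can look like: the collaborator that used to be constructed inside the class is passed in through the constructor, so a unit test can supply an in-memory fake instead of a database. The names are illustrative, not from the question.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OverdueCheckerTest {

    // The seam introduced by the refactoring: production code implements this with
    // a DAO/JDBC call, tests implement it in memory.
    interface InvoiceStore {
        int daysSinceOldestUnpaidInvoice(long customerId);
    }

    // After the refactoring: the collaborator is injected instead of hard-coded.
    static class OverdueChecker {
        private final InvoiceStore store;

        OverdueChecker(InvoiceStore store) {
            this.store = store;
        }

        boolean isOverdue(long customerId) {
            return store.daysSinceOldestUnpaidInvoice(customerId) > 30;
        }
    }

    @Test
    public void customerWithAnOldUnpaidInvoiceIsOverdue() {
        // No database needed: an anonymous in-memory implementation is enough.
        OverdueChecker checker = new OverdueChecker(new InvoiceStore() {
            public int daysSinceOldestUnpaidInvoice(long customerId) {
                return 45;
            }
        });

        assertTrue(checker.isOverdue(7L));
    }
}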
Commitment + Responsibility = Power
You say you are powerless. When you accept responsibility and commit to doing what needs doing, I think you will find you have all the power you need.
P.S. If anyone complains about anybody "wasting time" writing multiple unit tests when fixing a single defect, show them this video on the 80:20 rule and pound "defect clusters" into their brains.
It is hard to tell much from the information you give. One question I would ask: is the logical reason for writing the new code to replace the old code? If that is what you are doing, abandon the old code.
Is it also the old code that has the showstopper defects? If so, where are they coming from? Old code does not have "showstopper" defects; it usually just grinds closer and closer to a halt. It is old code, after all - it should have the same old defects and the same old limitations, not stuff that has to be looked at right away. Showstopper defects are new-code defects. It sounds like there is active development going on in the old code.
If you are writing all this new code on top of old code that sucks, with no plans to fix it once and for all, sorry, there is only so much you can do when you are too busy burying yourself to dig yourself out.
If the latter is the case, you should recognize where you are headed and try to detach a little. It is all going to collapse eventually; if you plan on being around, save your strength for a worthwhile battle.
In the meantime, try to pick up some design patterns. There are several that can at least help shield your new code from the old stuff, but still, ultimately it is just hard to write good code against bad code.
And your sprints sound confused. Is there not an overall direction? That should determine how much backlog you have; although things can change month to month, is there not a clear sense of moving towards some final goal?
And new code rotting? The way you prevent that is by having a meaningful design, a meaningful direction, and a quality team that is committed to both the quality of their work and the vision of the design. If you have that, discipline is what maintains quality. If you don't have that, sorry, you were basically writing code with no purpose already. It was rotten on the vine.
Not being critical, just trying to be honest. Take a deep breath. Slow down. You seem like you need it. Look at what you have written here. It tells us nothing. You talk of refactoring, scrums, showstoppers, defects, old code, new code. What does any of that mean? It is all jumbled up.
What about "new initiatives versus legacy systems"? "Need to refactor early sprint-cycle code in terms of the latest understanding", etc.? Are the showstoppers in fact "early components of the current enterprise initiatives have been released but are experiencing problems, and no time is budgeted because of new development"?
These would be meaningful concepts. You've given us nothing. I understand it is intense. My sprints are crazy too; we add a lot of backlog items because we could not get many requirements up front (a lot of my new requirements result from having to also contend with external regulatory bodies, so the normal business process is not always available).
But at the same time I am ground down by the sheer magnitude of what has to be done and the time to do it. Everything that is added to my backlog needs to be there. It is crazy, but at the same time I have a very clear idea of where I have been, where I need to go, and why the road is getting harder.
Step back, clear your thoughts, figure out the same - where you have been and where you are going. Because if you know that, it sure is not obvious. If you cannot communicate anything your peers can understand, how far are you going to get with a business manager?
Old code always sucks. There are probably some rare exceptions written by people with names like Kernighan or Thompson but, for the typical "code written in an office" stuff, over time it's gonna stink. Developers get more experienced. Newer practices, such as continuous integration, change the game. Stuff gets forgotten. New maintainers fail to grasp designs and wish for re-writes. So best accept this as normal.
Some random things that might help...
Talk about it with your team. Share your experiences and your concerns, while avoiding "man your old code sucks" (for obvious reasons) and see what the consensus is. You're probably not alone.
Forget about your managers. Don't expose them to this level of detail - they don't need to think about new vs. old code and probably won't understand if they do. This is a problem for your team to tackle and, if necessary, to make your PO aware of.
Be open to the possibility that you may be able to throw stuff out. Some of that old code probably relates to features that are no longer being used or failed to be adopted by users in the first place. To make this work for you, you really need to go a level higher and think in terms of where the code really delivers user or business value vs. where it's just a ball of mud that no one is brave enough to take a decision on. Who dares, wins.
Relax your view of architectural consistency. There's always a way to tap into a working system with new code somewhere, and that may allow you to slowly migrate to a newer, smarter approach, while preserving the old long enough not to break existing things.
Overall, winning in this kind of situation is less about coding skills and much more about smart choices and handling the human aspects.
Hope that helps.
I recommend keeping track of how many bugs and code changes involve your "old code" and present this to either your manager or to your fellow developers at your next team meeting. With this in hand it should be simple enough to convince them that more needs to be done to refactor your "old code" and bring it up to par with your "new code".
It would also be prudent to document the parts of your "old code" that are most difficult to understand. These would also be the parts of your "old code" that you should be refactoring first once you get the approval.
Something to try: group your classes into - say - the worst 10%, the best 10%, and the rest. Deliver the lists to your management, saying, "I predict the majority of bugs over the next quarter will be found in the first set." Base the grouping on length, cyclomatic complexity, test coverage - whatever tools are handy and comfortable for you. Then sit back and watch - and be right. Now you've got some credibility, some leverage when you say, "I'd like to invest some resources in making our bad code better, to reduce bugs and maintenance costs - and I know where to invest that energy, see?"
You could create diagrams and sketches of how the new code works and how the classes and functions are related to one another. You could use FreeMind or maybe Dia. And I definitely agree with Documenting and commenting your code.
I once had a problem with this too. I wrote a font class for J2ME for my own language. It was awful, for reasons you might also see in your own code:
No comments or documentation
Not very object-oriented
Bad variable / function names
...
But after a few months I was forced to write the whole thing again. Now I've learned to use meaningful variable names (which are sometimes VERY long), to write comments more than I write code, and to use diagrams for the project's classes and their relationships.
I don't know if this is a real answer, but it definitely worked for me. And for old code you might actually have to reread the whole thing and add comments as you remember the functionality.
Hope it helped.
Talk to your Product Owner! Explain that time invested in refactoring the old code will bring him the benefit of higher team velocity on new features once this obstacle is removed.
In addition to the approaches mentioned above, which are good, you can also try these:
For keeping future code clean
Try pair programming, at least for the parts where it makes sense. It's an effective way of making reviewed, refactored code a habit.
Try to get refactoring into the definition of "done". Then it will be part of the estimation process and time will be allotted accordingly. So the definition of done might include: coded, unit tested, functionally tested, performance tested, code reviewed, refactored, and integrated (or something like this).
For Cleaning up the old code:
Unit tests are great for helping you refactor and figure out how things work.
I agree with the comments that a business case needs to be made for large-scale refactoring. But, small-scale refactoring could be easily included in the estimate and will provide immediate return. i.e.: I spend 2 hours rewriting a piece but I would have spent that time looking for bugs anyway.
You may also want to consider getting the product owner and scrummaster to capture a separate velocity for the old code vs the new code, and use that accordingly.
If there's a desired new feature and you can delineate a non-overwhelming hunk of code that is in the way, then you might be able to get management's blessing to replace the old code with new code that has the desired new feature. When I did this, I had to write a somewhat ugly shim layer to meet the old interfaces of the part of the software I wasn't going to touch. And a test harness that could exercise the existing code and exercise the new code to make sure the new code, as seen through the shim layer, could fool the rest of the application into thinking nothing had changed. By reworking the portion we reworked, we were able to show huge performance benefits, compatibility with desired new hardware, reduction in each of our field site's needs for expertise in administering space for the application - and the new code was much more maintainable. That last point mattered not a whit to the users, but the other advantages from the rework were attractive enough to "sell" the users on the merits of a somewhat painful database conversion.
Another more modest success story: we had a decent trouble tracking system that had literally years of history. There was a subsystem of our application that was famed for the speed with which it would burn out maintenance programmers. Clearly (well, clearly in my mind) it was in need of a major re-write, but management wasn't enthused about that. We were able to dig through the history in the trouble tracking data to show the staffing level that had gone into maintaining this module, and for all that effort, the trouble tickets per month against that module continued to arrive at a constant rate. When faced with actual data like that, even the reluctant managers who had long been tight-fisted about staffing re-work of that subsystem could see the merit of assigning staff to rework that module.
The approach as before was to leave the input and output of that module alone. The good news was that throwing virtual memory at the new code with its fancy new data structures did give a noticeable performance improvement to the module. The bad news is that we were nearly done with the re-implementation before we really understood what was wrong in the original implementation such that it did work most of the time, but managed to fail on some of the transactions on some days. The first cut faithfully reproduced those bugs, but the bugs were easier to understand in the reworked code so we now had a shot at really fixing the real problem. In retrospect, maybe we'd have been smarter to have captured data that produced the problems and have taken better care to make sure the reworked version didn't reproduce that problem. But, the truth is, nobody understood the problem until we were quite far along on the re-write. So, the re-write gave improved performance to the users and improved understanding to the current programmers, such that the real problem could really be resolved at last.
A fail example: There was yet another incredibly ugly module that persistently was a sore spot. Alas, I wasn't clever enough to be able to understand the defacto interfaces to this particular wretched hive of scum and villainy, at least not in the time frame of the nominal release schedule. I'd like to believe that given more time we could have figured out a suitable plan for re-working that piece of the system too, and maybe once we understood it, we could even identify user-desired improvements that we could fit into the re-write. But I can't promise that you'll find a prize in every box. If the box is entirely obscure to you, slicing away a chunk of it and replacing that piece with clean code is hard to do. The guy who had charge of that module is probably the one who was best positioned to figure out a plan of attack, but he saw the frequent crashes and calls from the field for assistance as "job security". I don't think management ever really recognized that he needed to be eased aside for someone with a hunger for change, but that's what probably was needed.
Drew

Is Scrum possible without test driven development? [closed]

I have now witnessed two companies move to agile development with Scrum.
In both cases the standard of coding was good enough when each part of the application was being worked on by only one or two developers, with the developers spending a reasonable amount of time on one part of the application before moving to the next task. The defect rates were also reasonable.
However with Scrum the developers are expected:
to all be able to work on all the bits of the application.
to only work on one area of the application for a few days at most before moving to the next area
to mostly work on code they did not write
Code quality became an issue in both of the Scrum projects.
So is there a way to do Scrum that does not lead to these problems, without first getting all the developers to do test driven development?
Have you seen Scrum work well on a large project without test driven development? (If so how?)
I'd like to expand on what Dan said.
It's a very common misconception that Scrum / Agile dictates software engineering principles. This is a fallacy for many reasons. As Dan mentioned, Scrum is a software management process, NOT a software engineering process. That being said, very often you will see many engineering principles associated with Scrum; methodologies such as TDD, XP, etc tend to complement the management methodology that Scrum promotes, but are not required.
The reason that CI, TDD, and other engineering practices are so often found hand-in-hand with Scrum is that in general, many are good practices to follow no matter what management methodology is used.
I'd like to address a couple other fallacies in your OP:
However with Scrum the developers are expected:
* to all be able to work on all the bits of the application.
* to only work on one area of the application for a few days at most before moving to the next area
* to mostly work on code they did not write
As mentioned above, Scrum doesn't dictate what type or kind of work a developer works on. The developers themselves decide on what work to commit themselves to; if a database-heavy dev wants to only work on the DAL and associated stories, there's no reason that they cannot.
Again, Scrum doesn't dictate anything about how to build the application, so your second point is moot (see point 1).
This is a fallacy, since there is nothing that says a developer should only work on code that isn't theirs, or anything about how a developer should develop. If a developer on a Scrum team finds himself or herself only working on others' code, that would be coincidental, not because of the Scrum process itself.
See this question/answer for more information on the qualities generally expected in a developer working Scrum.
Yes, Scrum describes the software management approach. The program and project management paradigm should not dictate whether or not you use test driven development.
TDD is a software development practice or technique and although it works well with Scrum I don't think it will make or break your success with the practice.
I have personally seen Scrum work well on medium sized projects without a test driven approach to development. That is not to say we didn't write automated tests, they just were not always written first.
Regardless of the use of Scrum, what you were seeing was a change from a code-ownership approach to a communal-code approach. In order for that to work, there has to be a process change which supports it. One such possibility is TDD. There are others (automated unit testing even if it doesn't drive design, coupled with code reviews, strong design communication, greater design up front, not developing on code without first pairing with its original author, and more I'm sure you could think of).
Communal approaches work in smaller communities (in large ones it can degenerate into a tragedy of the commons) with a high sense of cohesion between the members.
We do Scrum at work, but we don't practice TDD. Nothing in the Scrum "guidelines" tells you that you have to use TDD. In fact, most agile practices are merely recommendations, insofar as they have proven to work well in agile environments (or even non-agile ones), without being a must if you want to implement Scrum.
We do write lots and lots of unit and integration tests to avoid extensive manual testing and to ensure that later changes in the code don't lead to any unpredicted side effects. But that's not TDD. It's mostly a sensible approach to ensure good code and software quality.
Please note that not implementing TDD was not an "active" decision; it just happened. We are encouraged to "write tests first", e.g. when fixing a bug, so it's a kind of voluntary, situation-driven way of getting a feel for TDD by applying it on a regular basis, but it is not mandatory.
Like others said: Scrum is a framework which can hold whatever practices you want to enforce in your development team. Some agile practices come naturally, because they generally make sense, but which ones you want to use and which ones you don't want is up to you.
Yes, Scrum is entirely possible and in most cases implemented without utilizing a TDD approach.
However, the flexibility that TDD provides is certainly something that a Scrum methodology can benefit from.
The main project I work on uses a Scrum approach but the embedded nature of our project makes test-driven development (the way that most people do it) impractical.
I think the problem you encountered was that the expectations of the programmers changed, not that the management process went to a Scrum-style system. If programmers are constantly being shuffled around to parts of the code they are not familiar with, investigate the process behind how tasks are being delegated relative to the old method. Are tasks being assigned to the developer who knows that area the best or are they going to the developer with the shortest to-do list? Is there a long backlog of to-do items for one part of the code and a scarcity of to-do items for another part? If you want to keep developers focused on the areas they excel in, then project management will want to adjust sprint lengths and task priorities to make sure that the workload can be distributed as desired and still be feasible given the time constraints of the sprint.
The Scrum framework is pretty small: it defines a few meetings, ideal iteration lengths, the responsibilities of the product owner and scrum master... and maybe a bit more.
However, once we have started our iteration, there is nothing in Scrum that dictates how and when a developer (or developers) should develop something. There is only the commitment that it will be 'done' by the end of the sprint.
Scrum is about the team committing to produce results, and the team being empowered to decide how to do so. If that means 1 dev per user story, great. If that means 3 devs per user story, that's fine also. Whatever way the Scrum team believes to be the best way to do the job is what should happen.
To answer the question, yes Scrum is possible without test driven development.
TDD is a worthwhile/recommended practice, but the results will vary depending on team and context. For example, was TDD in place at the start of the project, or are you trying to introduce the methodology at a later date?
There's some confusion about Scrum here.
Scrum per se doesn't tell you how/when to do technical things like TDD (that's a forever moving target). Scrum tells you how/when to manage the people things that happen on a project. It is an overall project management technique, not a construction management technique.
If your manager wishes to do the three things listed above during your sprints, that's fine, but that isn't part of the framework of Scrum. Those are for construction management, which isn't Scrum. It may be used in your Scrum-framework-bounded project, but it isn't in the official Scrum framework.
I think it's easy to be confused about this, though. Agile techniques like Scrum are usually evangelized by people on the 'net who are all about pushing buzzwords and 'happy shiny' things, and as such don't always understand/communicate well. At least, that's how Agile techniques were introduced to me, by an Agile apologist. It took me a good 6 months before I got past the hype / confusing terminology and figured out what they were talking about.
While there are parts above I agree with, I truly believe that you cannot deliver small increments of code (Scrum for software) without testing, period.
How do you know your sprint didn't break the last 4? How can you guarantee deliverables if you don't know that you didn't break anything in the past?
While I do agree Scrum is a management process and TDD is a software process, you must have some way to verify that you did not move backwards.
Scrum teaches daily deliverables and always moving forward, even at the expense of speed.
When someone says they want to do CI or Agile or Scrum, to me this automatically means there needs to be unit testing - not integration testing, as mentioned above, but rather each individual moving part having its own unit tests.
If you do an integration test, you are not testing each individual part in a self-contained way. Therefore all you prove is that the method called in one flow works, rather than each possible branch you would see in the MSIL.

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored (a small example of such a warning follows this list).
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
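For instance, the kind of warning that tends to sit ignored in the build output for years looks like this (illustrative code, not from any real project); any decent static analysis tool or IDE will flag the possible null dereference:

import java.util.Properties;

public class ConfigReader {

    public String databaseUrl(Properties props) {
        String url = props.getProperty("db.url"); // returns null when the key is missing
        return url.trim();                        // flagged as potential null pointer access
    }
}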
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horrible bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure."
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". Besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "fewer bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now". Even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control and software engineering as a whole is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF) which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects like the software engineering schools want to make you think. He just adds another parameter to control which is "Do I want to control this project? Will it be needed?"
Laziness / considered boring
Management feeling it's unnecessary - an ignorant "just do it right" attitude
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project"
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality if there are thousands of new feature requests and the programmers could simply implement something new instead of taking care of the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers, unless something goes wrong. At which point managers and customers are in an uproar and demand to know why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as a one-off or a throwaway. Of course, long after the project is done, it takes on a life of its own (and becomes prominent) as different business units start depending on it for their own business processes.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams are a support function for the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout, and is willing to do the work to convince others of the value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning what to test and how. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework - a brief sketch follows this list. Learning techniques and patterns that increase testability.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
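To make the mocking point a little more concrete, here is a minimal sketch using Mockito (any mocking framework works along the same lines); UserStore and UserService are invented stand-ins for your own types, not anything from this answer.

import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UserServiceTest {

    // Normally backed by the database; in the test it is mocked away.
    interface UserStore {
        String passwordHashFor(String userName);
    }

    static class UserService {
        private final UserStore store;

        UserService(UserStore store) {
            this.store = store;
        }

        boolean login(String userName, String passwordHash) {
            return passwordHash.equals(store.passwordHashFor(userName));
        }
    }

    @Test
    public void loginFailsWhenHashDoesNotMatch() {
        UserStore store = mock(UserStore.class);
        when(store.passwordHashFor("alice")).thenReturn("expected-hash");

        UserService service = new UserService(store);

        assertFalse(service.login("alice", "wrong-hash"));
        verify(store).passwordHashFor("alice"); // the real database was never touched
    }
}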
So until you find the right way: yeah, it's dull, unrewarding, hard to do, time-consuming, etc.
EDIT:
In this blog post I go into depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on continuous integration and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. Even if you set up mock data, can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered; that is, unit testing is bad at simulating the unexpected. If you haven't considered what could happen in a power outage, or when the network link sends bad data that is still CRC-correct, writing tests for those cases is futile.
I am all in favour of code inspections as they let programmers share experience and code style from other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit Testing takes extra work. If a programmer sees that his product "works" (eg, no unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was changing too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and felt no need for unit tests. So we have a few tests left over as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other project parts and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically ordered to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It will take time and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the loss of time that yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, though two or three is really needed. So new colleagues do not manage to write proper tests. And there is no point in creating bad tests.
It's 'dull' to spend more than a day chasing some random 'feature' of extreme importance through a mysterious code jungle written by someone else x years ago, with no clue what's going wrong, why it's going wrong, and absolutely no idea what could fix it, when the job was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing makes me more productive, but I've seen lots of badly formatted, unreadable, poorly designed code which passed all its tests (generally long-in-the-tooth code which had been patched many times). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of a shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, says that I have too many methods in my class. So I should cut the class into abstract class/classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since they might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting and you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on finding when code is too complicated and not care so much about the little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you change anything you have to worry about broken (poorly written) unit tests, and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place. For example, I did a lot of unit testing in my webpage parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person who writes tests (I want to get better at it; it's the only reason I do it). I understand that for others, writing tests is just more work and you get nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or just comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on a web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change on a webpage. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

Inform potential clients about security vulnerabilities?

We have a lot of open discussions with potential clients, and they frequently ask about our level of technical expertise, including the scope of work for our current projects. The first thing I do in order to gauge the level of expertise of the staff they have now, or have previously used, is to check for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who is vulnerable, but I started to wonder: would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them"? Non-technical folks get scared pretty easily by this stuff, so I'm wondering, is this a show of good faith, or a poor business practice?
I would say that surprising people by suddenly penetration-testing their software may bother them, if only because they didn't know ahead of time. I would say if you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time. If they seem a little distraught by this, tell them the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most security-minded make mistakes: the Debian PRNG vulnerability is a good example of this.
I think this is a fairly subjective decision and different prospects would react differently if you told them.
I think an idea might be to let them know after they have given business to someone else.
At least this way, the ex-prospect will not think that you are trying to pressure them into giving you the business.
I think the problem with this would be that it would be quite hard to do checks for XSS without messing up their site. Also, things like SQL injection could be quite dangerous. If you stuck to appending SELECTs, you might not have too much of a problem, but then the question is, how do you know it's even executing the injected SQL?
From the way you described it, it seems like a poor business practice that could be a beneficial one with some modification.
First off, any vulnerability assessment or penetration test you conduct on a customer should be agreed upon in writing by that customer, period. This covers your actions legally. Without a written agreement, if you inadvertently cause damage (application crash, denial-of-service, data leak, etc) during your inspection, you are liable and could be charged (under US law; other countries have different standards).
Even if you do not cause damage, a clueless or potentially malicious customer could take you to court claiming damages; a clueless judge might just award them.
If you have written authorization to do so, then a free vulnerability assessment to attract potential customers sounds like a show of good faith and demonstrates what you want -- your skills.

How to make junior programmers write tests? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
We have a junior programmer that simply doesn't write enough tests.
I have to nag him every two hours, "have you written tests?"
We've tried:
Showing that the design becomes simpler
Showing it prevents defects
Making it an ego thing saying only bad programmers don't
This weekend 2 team members had to come to work because his code had a NULL reference and he didn't test it
My work requires top quality stable code, and usually everyone 'gets it' and there's no need to push tests through. We know we can make him write tests, but we all know the useful tests are those written when you're into it.
Do you know of more motivations?
This is one of the hardest things to do. Getting your people to get it.
Sometimes one of the best ways to help junior level programmers 'get it' and learn the right techniques from the seniors is to do a bit of pair programming.
Try this: on an upcoming project, pair the junior guy up with yourself or another senior programmer. They should work together, taking turns "driving" (being the one typing at the keyboard) and "coaching" (looking over the shoulder of the driver and pointing out suggestions, mistakes, etc. as they go). It may seem like a waste of resources, but you will find:
That these guys together can produce code plenty fast and of higher quality.
If your junior guy learns enough to "get it" with a senior guy directing him along the right path (e.g. "OK, now before we continue, let's write a test for this function."), it will be well worth the resources you commit to it.
Maybe also have someone in your group give the Unit Testing 101 presentation by Kate Rhodes; I think it's a great way to get people excited about testing, if delivered well.
Another thing you can do is have your Jr. Devs practice the Bowling Game Kata, which will help them learn Test Driven Development. It is in Java, but could easily be adapted to any language.
Have a code review before every commit (even if it's a 1 minute "I've changed this variable name"), and as part of the code review, review any unit tests.
Don't sign off on the commit until the tests are in place.
(Also - If his work wasn't tested - why was it in a production build in the first place? If it's not tested, don't let it in, then you won't have to work weekends)
For myself, I have started insisting that every bug I find and fix be expressed as a test (a minimal sketch of the cycle follows below):
"Hmmm, that's not right..."
Find possible problem
Write a test, show that the code fails
Fix the problem
Show that the new code passes
Loop if the original problem persists
I try to do this even while banging stuff out, and I get done in about the same time, only with a partial test suite already in place.
(I don't live in a commercial programming environment, and am often the only coder working a particular project.)
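To illustrate that cycle, here is a minimal sketch, assuming JUnit 4; PriceCalculator and its discount rule are invented for the example, and the test is the kind of regression guard the "write a test, show that the code fails" step produces:

    // A minimal sketch of a bug expressed as a test before fixing it (JUnit 4).
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorRegressionTest {

        static class PriceCalculator {
            // Bug report: orders of exactly 10 items should already get the 5% discount,
            // but the original code used "> 10" instead of ">= 10".
            double total(int quantity, double unitPrice) {
                double total = quantity * unitPrice;
                return quantity >= 10 ? total * 0.95 : total;  // the one-character fix
            }
        }

        @Test
        public void tenItemsGetTheDiscount() {
            // This test failed against the buggy "> 10" version (it returned 100.0),
            // and passes once the boundary is fixed. It now guards against regression.
            assertEquals(95.0, new PriceCalculator().total(10, 10.0), 0.0001);
        }
    }

The point is the order of events: the test is written to fail against the broken code, the fix makes it pass, and it then stays in the suite so the same bug cannot quietly come back.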
Imagine I am a mock programmer, named... Marco. Imagine I have graduated school not that long ago, and never really had to write tests. Imagine I work in a company that doesn't really enforce or ask for this. OK? Good! Now imagine that the company is switching to using tests, and they are trying to get me in line with this. I will give a somewhat snarky reaction to the items mentioned so far, as if I hadn't done any research on this.
Let's get this started with the creator:
Showing that the design becomes simpler.
How can writing more code make things simpler? I would now have to keep tabs on more cases, etc. This makes it more complicated, if you ask me. Give me solid details.
Showing it prevents defects.
I know that. This is why they are called tests. My code is good, and I checked it for issues, so I don't see where those tests would help.
Making it an ego thing saying only bad programmers don't.
Ohh, so you think I am a bad programmer just because I don't test as much. I'm insulted and positively annoyed at you. I would rather have assistance and support than sayings.
#Justin Standard: At the start of a new project, pair the junior guy up with yourself or another senior programmer.
Ohh, this is so important that resources will be spent making sure I see how things are done, and have some assist me on how things are done. This is helpful, and I might just start doing it more.
#Justin Standard: Read Unit Testing 101 presentation by Kate Rhodes.
Ahh, that was an interesting presentation, and it made me think about testing. It hammered some points in that I should consider, and it might have swayed my views a bit.
I would love to see more compelling articles, and other tools to assist me in getting in line with thinking this is the right way to do things.
#Dominic Cooney: Spend some time and share testing techniques.
Ahh, this helps me understand what is expected of me as far as techniques, and it puts more items in my bag of knowledge, that I might use again.
#Dominic Cooney: Answer questions, examples and books.
Having a point person (people) to answer question is helpful, it might make me more likely to try. Good examples are great, and it gives me something to aim for, and something to look for reference. Books that are relevant to this directly are great reference.
#Adam Hayle: Surprise Review.
Say what? You sprung something on me that I am completely unprepared for. I feel uncomfortable with this, but will do my best. I will now be scared and mildly apprehensive of this coming up again, thank you. The scare tactic might have worked, but it does have a cost. Still, if nothing else works, this might just be the push that is needed.
#Rytmis: Items are only considered done when they have test cases.
Ohh, interesting. I see I really do have to do this now, otherwise I'm not completing anything. This makes sense.
#jmorris: Get Rid / Sacrifice.
glares, glares, glares - There is a chance I might learn, and with support and assistance I can become a very important and functional part of the team. This is one of my handicaps now, but it won't be for long. However, if I just don't get it, I understand that I will go. I think I will get it.
In the end, the support of my team will play a large part in all this. Having a person take their time to assist me and get me started on good habits is always welcome. Then, afterward, having a good support net would be great. It would always be appreciated to have someone come by a few times afterward and go over some code, to see how everything is flowing - not in a review per se, but more as a friendly visit.
Reasoning, Preparing, Teaching, Follow up, Support.
I've noticed that a lot of programmers see the value of testing on a rational level. If you've heard things like "yeah, I know I should test this but I really need to get this done quickly" then you know what I mean. However, on an emotional level they feel that they get something done only when they're writing the real code.
The goal, then, should be to somehow get them to understand that testing is in fact the only way to measure when something is "done", and thus give them the intrinsic motivation to write tests.
I'm afraid that's a lot harder than it should be, though. You'll hear a lot of excuses along the lines of "I'm in a real hurry, I'll rewrite/refactor this later and then add the tests" -- and of course, the followup never happens because, surprisingly, they're just as busy the next week.
Here's what I would do:
First time out... "we're going to do this project jointly. I'm going to write the tests and you're going to write the code. Pay attention to how I write the tests, coz that's how we do things around here and that's what I'll expect of you."
Following that... "You're done? Great! First let's look at the tests that are driving your development. Oh, no tests? Let me know when that is done and we'll reschedule looking at your code. If you're needing help to formulate the tests let me know and I'll help you."
He's already doing this. Really. He just doesn't write it down. Not convinced? Watch him go through the standard development cycle:
Write a piece of code
Compile it
Run to see what it does
Write the next piece of code
Step #3 is the test. He already does testing, he just does it manually. Ask him this question: "How do you know tomorrow that the code from today still works?" He will answer: "It's such a little amount of code!"
Ask: "How about next week?"
When he hasn't got an answer, ask: "How would you like your program to tell you when a change breaks something that worked yesterday?"
That's what automatic unit testing is all about.
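For example, here is what that manual step #3 might look like once it is written down as an automated check; a minimal sketch assuming JUnit 4, with Greeter invented purely for the illustration:

    // The manual "run it and eyeball the output" step, captured as a test (JUnit 4).
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GreeterTest {

        static class Greeter {
            String greet(String name) {
                return "Hello, " + name + "!";
            }
        }

        @Test
        public void greetsByName() {
            // Yesterday's manual check, written down once; from now on the build
            // answers "does the code from today still work?" automatically.
            assertEquals("Hello, Ada!", new Greeter().greet("Ada"));
        }
    }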
As a junior programmer, I'm still trying to get into the habit of writing tests. Obviously it's not easy to pick up new habits, but thinking about what would make this work for me, I have to +1 the comments about code reviews and coaching/pair programming.
It may also be worth emphasising the long-term purpose of testing: ensuring that what worked yesterday is still working today, and next week, and next month. I only say that because in skimming the answers I didn't see that mentioned.
In doing code reviews (if you decide to do that), make sure your young dev knows it's not about putting him down, it's about making the code better. Because that way his confidence is less likely to get damaged. And that's important. On the other hand, so is knowing how little you know.
Of course, I don't really know anything. But I hope the words have been useful.
Edit: [Justin Standard]
Don't put yourself down, what you have to say is pretty much right on.
On your point about code reviews: what you will find is that not only will the junior dev learn in the process, but so will the reviewers. Everyone in a code review learns if you make it a collaborative process.
Frankly, if you are having to put that much effort into getting him to do something then you may have to come to terms with the thought that he may just not be a good fit for the team, and may need to go. Now, this doesn't necessarily mean firing him... it may mean finding someplace else in the company his skills are more suited. But if there is no place else...you know what to do.
I'm assuming he is also a fairly new hire (< 1 year) and probably recently out of school, in which case he may not be accustomed to how things work in a corporate setting - things most of us could get away with in college.
If this is the case, one thing I've found works is to have a sort of "surprise new hire review." It doesn't matter if you've never done it before - he won't know that. Just sit him down, tell him you are going to go over his performance, and show him some real numbers. Take your normal review sheet (you do have a formal review process, right?) and change the heading if you want so it looks official, and show him where he stands. If you show him in a very formal setting that not doing tests is adversely affecting his performance rating, as opposed to just "nagging" him about it, he will hopefully get the point. You've got to show him that his actions will actually affect him, be it pay-wise or otherwise.
I know, you may want to stay away from doing this because it's not official... but I think you are within reason to do it and it's probably going to be a whole lot cheaper than having to fire him and recruit someone new.
As a junior programmer myself, I thought that I'd reveal what it was like when I found myself in a similar situation to your junior developer.
When I first came out of uni, I found that it had severely under-equipped me to deal with the real world. Yes, I knew some Java basics and some philosophy (don't ask), but that was about it. When I first got my job it was a little daunting, to say the least. Let me tell you, I was probably one of the biggest cowboys around; I would hack together a little bug fix/algorithm with no comments/testing/documentation and ship it out the door.
I was lucky enough to be under the supervision of a kind and very patient senior programmer. Luckily for me, he decided to sit down with me and spend 1-2 weeks going through my very hacked-together code. He would explain where I'd gone wrong, and the finer points of C and pointers (boy did that confuse me!). We managed to write a pretty decent class/module in about a week. All I can say is that if the senior dev hadn't invested the time to help me along the right path, I probably wouldn't have lasted very long.
Happily, 2 years down the line, I would hope that some of my colleagues might even consider me an average programmer.
Take home points
Most Universities are very bad at preparing students for the real world
Paired programming really helped me. That's not to say that it will help everyone, but it worked for me.
Assign them to projects that don't require "top quality stable code" if that's your concern and let the jr. developer fail. Have them be the one to 'come in on the weekend' to fix their bugs. Have lunch a lot and talk about software development practices (not lectures, but discussions). In time they will acquire and develop the best practices to do the tasks they are assigned.
Who knows, they might even come up with something better than the techniques your team currently uses.
If the junior programmer, or anyone, doesn't see the value in testing, then it will be hard to get them to do it...period.
I would have made the junior programmer sacrifice their weekend to fix the bug. His actions (or lack thereof) are not affecting him directly. Also, make it apparent that he will not see advancement and/or pay increases if he doesn't improve his skills in testing.
In the end, even with all your help, encouragement, mentoring, he might not be a fit for your team, so let him go and look for someone who does get it.
I second RodeoClown's comment about code reviewing every commit. Once he's done it a fair few times he'll get in the habit of testing stuff.
I don't know if you need to block commits like that though.
At my workplace everyone has free commit to everything, and all SVN commit messages (with diffs) are emailed to the team.
Note: you really want the thunderbird colored-diffs addon if you plan on doing this.
My boss or myself (the 2 'senior' coders) will end up reading over the commits, and if there's any stuff like "you forgot to add unit tests" we just flick an email or go and chat to the person, explaining why they needed unit tests or whatever. Everyone else is encouraged to read the commits too, as it's a great way of seeing what's going on, but the junior devs don't comment so much.
You can help encourage people to get into the habit of this by periodically saying things like "Hey, bob, did you see that commit I did this morning, I found this neat trick where you can do blah blah whatever, read the commit and see how it works!"
NB: We have 2 'senior' devs and 3 junior ones. This may not scale, or you might need to adjust the process a bit with more developers.
It's his mentor's responsibility to teach him/her. How well are you teaching him/her HOW to test? Are you pair programming with him? The junior more than likely doesn't know HOW to set up a good test for xyz.
As a Junior freshout of school he knows many Concepts. Some technique. Some experience. But in the end, all a Junior is POTENTIAL. Almost every feature they work on, there will be something new that they have never done before. Sure the Junior may have done a simple State pattern for a project in class, opening and shutting "doors", but never a real world application of the patterns.
He/she will only be as good as how well you teach. If they were able to "Just get it" do you think they would have taken a Junior position in the first place?
In my experience, Juniors are hired and given almost the same responsibility as Seniors, but are just paid less and then ignored when they start to falter. Forgive me if I seem bitter; it's 'cause I am.
Make code coverage part of the reviews.
Make "write a test that exposes the bug" a prerequisite to fixing a bug.
Require a certain level of coverage before code can be checked in.
Find a good book on test-driven development and use it to show how test-first can speed development.
Lots of psychology and helpful "mentoring" techniques but, in all honesty, this just boils down to "write tests if you want to still have a job tomorrow."
You can couch it in whatever terms you think are appropriate, harsh or soft, it doesn't matter. But the fact is, programmers are not paid to just throw together code & check it in -- they're paid to carefully put together code, then put together tests, then test their code, THEN check the whole thing in. (At least that's what it sounds like, from your description.)
Hence, if someone is going to refuse to do their job, explain to them that they can stay home, tomorrow, and you'll hire someone who WILL do the job.
Again, you can do all this gently, if you think that's necessary, but a lot of people just need a big hard slap of Life In The Real World, and you'd be doing them a favor by giving it to them.
Good luck.
Change his job description for a while to solely be writing tests and maintaining tests. I've heard that many companies do this for fresh new inexperienced people for a while when they start.
Additionally, issue a challenge while he's in that role: write tests that will a) fail on the current code and b) fulfill the requirements of the software. Hopefully it'll cause him to create some solid and thorough tests (improving the project) and make him better at writing tests for when he re-integrates into core development.
Edit: "fulfill the requirements of the software" meaning that he's not just writing tests to purposely break the code when the code never intended or needed to take that test case into account.
If your colleague lacks experience writing tests maybe he or she is having difficulty testing beyond simple situations, and that is manifesting itself as inadequate testing. Here's what I would try:
Spend some time and share testing techniques, like dependency injection, looking for edge cases, and so on, with your colleague (see the sketch after this list).
Offer to answer questions about testing.
Do code reviews of tests for a while. Ask your colleague to review changes of yours that are exemplary of good testing. Look at their comments to see if they're really reading and understanding your test code.
If there are books that fit particularly well with your team's testing philosophy give him or her a copy. It might help if your code review feedback or discussions reference the book so he or she has a thread to pick up and follow.
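As a concrete (if simplified) illustration of the dependency-injection point in the first item, here is a sketch assuming JUnit 4 and Java 8+; the names Clock and LateFeePolicy are invented for the example:

    // A minimal sketch of constructor injection making time-dependent logic testable,
    // including pinning down an edge case (the due date itself).
    import java.time.LocalDate;
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class LateFeePolicyTest {

        interface Clock {                 // the injected dependency
            LocalDate today();
        }

        static class LateFeePolicy {
            private final Clock clock;
            LateFeePolicy(Clock clock) { this.clock = clock; }   // constructor injection

            boolean isLate(LocalDate dueDate) {
                return clock.today().isAfter(dueDate);
            }
        }

        @Test
        public void dueDateItselfIsNotLate() {
            // A fixed clock replaces "the real current date" so the edge case is repeatable.
            Clock fixed = () -> LocalDate.of(2024, 1, 31);
            assertFalse(new LateFeePolicy(fixed).isLate(LocalDate.of(2024, 1, 31)));
            assertTrue(new LateFeePolicy(fixed).isLate(LocalDate.of(2024, 1, 30)));
        }
    }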
I wouldn't especially emphasize the shame/guilt factor. It is worth pointing out that testing is a widely adopted, good practice and that writing and maintaining good tests is a professional courtesy so your team mates don't need to spend their weekends at work, but I wouldn't belabor those points.
If you really need to "get tough" then institute an impartial system; nobody likes to feel like they're being singled out. For example your team might require code to maintain a certain level of test coverage (able to be gamed, but at least able to be automated); require new code to have tests; require reviewers to consider the quality of tests when doing code reviews; and so on. Instituting that system should come from team consensus. If you moderate the discussion carefully you might uncover other underlying reasons your colleague's testing isn't what you expect.
# jsmorris
I once had the senior developer and "architect" berate me and a tester (it was my first job out of college) in email for not staying late and finishing such an "easy" task the night before. We had been at it all day and called it quits at 7pm; I had been thrashing since 11am (before lunch) that day and had pestered every member of our team for help at least twice.
I responded and cc'd the team with:
"I've been disappointed in you for a month now. I never get help from the team. I'll be at the coffee shop across the street if you need me. I'm sorry i couldn't debug the 12 parameter, 800 line method that just about everything is dependent on."
After cooling off at the coffee shop for an hour, i went back in the office, grabbed my crap and left. After a few days they called me asking if I was coming in, I said I would but I had an interview, maybe tomorrow.
"So your quitting then?"
On your source repository: use hooks before each commit (a pre-commit hook for SVN, for example).
In that hook, check for the existence of at least one test case for each method. Use a convention for unit test organisation that you could easily enforce via a pre-commit hook.
On an integration server, compile everything and regularly check the test coverage using a test coverage tool. If test coverage is not 100% for a piece of code, block the developer's commits. He should send you the test cases that cover 100% of the code.
Only automatic checks can scale well on a project. You cannot check everything by hand.
The developer should have a means to check whether his test cases cover 100% of the code. That way, if he doesn't commit 100% tested code, it is his own fault, not an "oops, sorry, I forgot" fault.
Remember : People never do what you expect, they always do what you inspect.
First off, like most respondents here point out, if the guy doesn't see the value in testing, there's not much you can do about it, and you've already pointed out that you can't fire the guy. However, failure is not an option here, so what about the few things you can do?
If your organization is large enough to have over 6 developers, I strongly recommend having a Quality Assurance department (even if it's just one person to start). Ideally, you should have a ratio of 1 tester to 3-5 developers. The thing about programmers is... they are programmers, not testers. I have yet to interview a programmer that has been formally taught proper QA techniques.
Most organizations make the fatal flaw of assigning the testing roles to the new-hire, the person with the LEAST amount of exposure to your code -- ideally, the senior developers should be moved to the QA role as they have the experience in the code, and (hopefully) have developed a sixth sense for code smells and failure points that can crop up.
Furthermore, the programmer that made the mistake is probably not going to find the defect, because it's usually not a syntax error (those get picked up in the compile) but a logic error - and the same logic is at work when they write the test as when they write the code. Don't have the person who developed the code test that code - they'll find fewer bugs than anyone else would.
In your case, if you can afford the redirected work effort, make this new guy the first member of your QA team. Get him to read "Software Testing In The Real World: Improving The Process", because he obviously will need some training in his new role. If he doesn't like it, he'll quit and your problem is still solved.
A slightly less vengeful approach would be to let this person do what they are good at (I'm assuming this person got hired because they are actually competent at the programming part of the job), and hire a tester or two to do the testing (university students often have practicum or "co-op" terms, would love the exposure, and are cheap).
Side Note: Eventually, you'll want the QA team reporting to a QA director, or at least not to a software developer manager, because having the QA team report to the manager who's primary goal is to get the product done is a conflict of interest.
If your organization is smaller than 6, or you can't get away with creating a new team, I recommend paired programming (PP). I'm not a total convert of all the extreme programming techniques, but I'm definitely a believer in paired programming. However, both members of the paired programming team have to be dedicated, or it simply doesn't work. They have to follow two rules: the inspector has to fully understand what is being coded on the screen or he has to ask the coder to explain it; the coder can only code what he can explain -- no "you'll see" or "trust me" or hand-waving will be tolerated.
I only recommend PP if your team is capable of it, because, like testing, no amount of cheering or threatening will persuade a couple of ego-filled introverts to work together if they don't feel comfortable doing so. However, I find that between the choice of writing detailed functional specs and doing code reviews vs. paired programming, the PP usually wins out.
If PP is not for you, then TDD is your best bet, but only if it's taken literally. Test Driven Development means you write the tests FIRST, run the tests to prove they actually do fail, then write the simplest code to make them pass. The trade-off is that you now (should) have a collection of thousands of tests, which is also code, and is just as likely as production code to contain bugs. I'll be honest, I'm not a big fan of TDD, mainly for this reason, but it works for many developers who would rather write test scripts than test case documents - some testing is better than none. Couple TDD with PP for a better likelihood of test coverage and fewer bugs in the script.
If all else fails, have the programmers equivalence of a swear jar -- each time the programmer breaks the build, they have to put $20, $50, $100 (whatever is moderately painful for your staff) into a jar that goes to your favorite (registered!) charity. Until they pay up, shun them :)
All joking aside, the best way to get your programmer to write tests is don't let him program. If you want a programmer, hire a programmer -- If you want tests, hire a tester. I started as a junior programmer 12 years ago doing testing, and it turned into my career path, and I wouldn't trade it for anything. A solid QA department that is properly nurtured and given the power and mandate to improve the software is just as valuable as the developers writing the software in the first place.
This may be a bit heartless, but the way you describe the situation it sounds like you need to fire this guy. Or at least make it clear: refusing to follow house development practices (including writing tests) and checking in buggy code that other people have to clean up will eventually get you fired.
The main reason junior engineers/programmers don't take lots of time to design and perform test scripts is that most CS certifications do not heavily require this, so other areas of engineering, such as design patterns, are covered more thoroughly in college programs.
In my experience, the best way to get the junior professionals into the habit, is to make it part of the process explicitly. That is, when estimating the time an iteration should take, the time of design, write and/or execute the cases should be incorporated into this time estimate.
Finally, reviewing the test script design should be part of a design review, and the actual code should be reviewed in the code review. This makes the programmer liable for doing proper testing of each line of code he/she writes, and the senior engineer and peers liable to provide feedback and guidance on the code and test written.
Based on your comment, "Showing that the design becomes simpler" I'm assuming you guys practice TDD. Doing a code review after the fact is not going to work. The whole thing about TDD is that it's a design and not a testing philosophy. If he didn't write the tests as part of the design, you aren't going to get a lot of benefit from writing tests after the fact - especially from a junior developer. He'll end up missing a whole lot of corner cases and his code will still be crappy.
Your best bet is to have a very patient senior developer to sit with him and do some pair programming. And just keep at it until he learns. Or doesn't learn, in which case you need to reassign him to a task he is better suited to because you will just end up frustrating your real developers.
Not everyone has the same level of talent and/or motivation. Development teams, even agile ones, are made up of people on the "A-Team" and people on the "B-Team". A-Team members are the ones who architect the solution, write all the non-trivial production code, and communicate with the business owners - all the work that requires thinking outside the box. The B-Team handles things like configuration management, writing scripts, fixing lame bugs, and doing maintenance work - all the work that has strict procedures and small consequences for failure.