Just wondering about the pros and cons of TDD/automated unit testing, and looking for the community's view: is it acceptable for professional developers to write applications without supporting unit tests?
Re-asked on Programmers: https://softwareengineering.stackexchange.com/questions/159572/is-it-acceptable-as-a-professional-developer-not-to-write-unit-tests
I bet I'll get downvoted for this, but I still say: if you have other measures to ensure quality, including avoiding regression, program validation, and program verification, then no.
The usual problem is that people have no tools other than unit testing to achieve this.
If you have formally verified models (there's a tool that actually checks the model, or it was constructed in a way that guarantees its validity), and formally verified ways to ensure that the actually running software conforms to that model, then it's fine.
Example: if you are sure that the code you wrote in Ruby will act as you expect (because you or someone else has tested the Ruby interpreter and it doesn't have bugs, or you use only a subset of features known to be safe), then it's fine. We usually trust C compilers and CPUs in this manner.
Also, if a program is only to be used once, there's no regression problem! If I write a one-liner in bash which will calculate something for me, I might test it manually on fake data first, then run it on the real data - no need to write an automated test.
If you're willing to take the blame, you can also rely on assumptions: I usually assume that Eclipse is pretty good at creating setters and getters, and I don't test those. I also assume that any problem with the Collections classes in Java 7 would have turned up by now. But if there is trouble, it's your personal trouble. Don't blame anyone.
Personally, I rarely unit test certain code, because I formally check it while it's still a flowchart on a piece of paper, and I ensure that I only use subsets of the language/libraries known to work in such situations. Also, I never let code out without peer review. Still, it's sometimes better if there's someone who runs an acceptance test on it...
It is up to you. The question is more philosophical in nature.
Unit tests are just a tool to help you. You can choose to ignore them. However, if you are going to work on a more-than-trivial project, I would advise you to use unit tests.
Yes, they take time to write, too. But in the end you will save a lot whenever the code is refactored or parts of it need to be changed.
As always: It depends.
Generally speaking, unit tests are a good thing: they catch a whole class of errors, they verify that particular parts of your code work as expected under given circumstances, and they make it easier to track down errors when something does go wrong. So unless you have good reasons not to, you should write unit tests.
Good reasons not to write unit tests include:
Making relatively small changes to a codebase that is badly structured and therefore hard to test (usually because there is little separation of concerns, and testable units cannot be isolated for testing without intrusive changes to the codebase itself).
The nature of the problem domain makes the code inherently untestable. This is rare, but it happens - for example, it is very hard to come up with meaningful unit tests for a routine that draws a GUI: you'd have to make it render to a mocked surface and then check individual pixels, but you'd also have to mock all the parameters that influence layout decisions, etc.; while this is theoretically possible, it's not usually worth the effort, and one should opt for manual or semi-automatic testing in such cases.
The project is a tiny throwaway program with such a small scope and such a short lifecycle that the benefit gained from unit testing (increased maintainability, decreased complexity) is marginal. Keep in mind, however, that software tends to live longer than it was designed for, and your one-off throwaway script might very well end up becoming a mission-critical part of the company's processes.
I have now witnessed two companies move to agile development with Scrum.
In both cases the standard of coding was good enough when each part of the application was worked on by only one or two developers, with each developer spending a reasonable amount of time on one part of the application before moving to the next task. Defect rates were also reasonable.
However, with Scrum the developers are expected:
to all be able to work on all the bits of the application;
to only work on one area of the application for a few days at most before moving to the next area;
to mostly work on code they did not write.
Code quality became an issue in both of the Scrum projects.
So is there a way to do Scrum that does not lead to these problems, without first getting all the developers to do test-driven development?
Have you seen Scrum work well on a large project without test driven development? (If so how?)
I'd like to expand on what Dan said.
It's a very common misconception that Scrum / Agile dictates software engineering principles. It doesn't, for many reasons. As Dan mentioned, Scrum is a software management process, NOT a software engineering process. That said, you will very often see engineering principles associated with Scrum; methodologies such as TDD, XP, etc. tend to complement the management methodology that Scrum promotes, but they are not required.
The reason that CI, TDD, and other engineering practices are so often found hand-in-hand with Scrum is that in general, many are good practices to follow no matter what management methodology is used.
I'd like to address a couple of other fallacies in your OP:
However with Scrum the developers are expected:
* to all be able to work on all the bits of the application.
* to only work on one area of the application for a few days at most before moving to the next area
* to mostly work on code they did not write
As mentioned above, Scrum doesn't dictate what type or kind of work a developer does. The developers themselves decide what work to commit to; if a database-heavy dev wants to work only on the DAL and associated stories, there's no reason they can't.
Again, Scrum doesn't dictate anything about how to build the application, so your second point is moot (see point 1).
This is a fallacy, since nothing says a developer should only work on code that isn't theirs, or dictates anything about how a developer should develop. If a developer on a Scrum team finds himself or herself working only on others' code, that is coincidence, not a consequence of the Scrum process itself.
See this question/answer for more information on the qualities generally expected in a developer working Scrum.
Yes, Scrum describes the software management approach. The program and project management paradigm should not dictate whether or not you use test driven development.
TDD is a software development practice or technique and although it works well with Scrum I don't think it will make or break your success with the practice.
I have personally seen Scrum work well on medium-sized projects without a test-driven approach to development. That is not to say we didn't write automated tests; they just were not always written first.
Regardless of the use of Scrum, what you were seeing was a change from a code-ownership approach to a communal-code approach. For that to work, there has to be a process change which supports it. One such possibility is TDD. There are others (automated unit testing even if it doesn't drive design, coupled with code reviews, strong design communication, greater design up front, not developing on code without first pairing with its original author, and more I'm sure you could think of).
Communal approaches work in smaller communities (in large ones it can degenerate into a tragedy of the commons) with a high sense of cohesion between the members.
We do Scrum at work, but we don't practice TDD. Nothing in the Scrum "guidelines" tells you that you have to use TDD. In fact, most agile practices are merely recommendations, insofar as they have proven to work well in agile environments (or even non-agile ones), without being a must if you want to implement Scrum.
We do write lots and lots of unit and integration tests to avoid extensive manual testing and to ensure that later changes in the code don't lead to unpredicted side effects. But that's not TDD. It's mostly a sensible approach to ensuring good code and software quality.
Please note that not implementing TDD was not an "active" decision; it just happened. We are encouraged to "write tests first", e.g. when fixing a bug, so it's a voluntary, situation-driven, non-obligatory way of getting a feel for TDD by applying it on a regular basis, but it is not mandatory.
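As a sketch of what "write tests first when fixing a bug" can look like in practice (JUnit 4; the parser and the bug are made up for illustration):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceParserTest {

        // Hypothetical unit under test, inlined to keep the sketch self-contained.
        // Bug report: values with surrounding whitespace crashed the parser.
        static int parseCents(String s) {
            return Integer.parseInt(s.trim()); // trim() is the fix; the test below came first
        }

        @Test
        public void ignoresSurroundingWhitespace() {
            // Written first to reproduce the bug; it failed with a
            // NumberFormatException until the fix above was in place.
            assertEquals(150, parseCents(" 150 "));
        }
    }

The test documents the bug and guards against its return, whether or not you practice full TDD.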
Like others said: Scrum is a framework which can hold whatever practices you want to enforce in your development team. Some agile practices come naturally, because they generally make sense, but which ones you want to use and which ones you don't want is up to you.
Yes, Scrum is entirely possible and in most cases implemented without utilizing a TDD approach.
However, the flexibility that TDD provides is certainly something that a Scrum methodology can benefit from.
The main project I work on uses a Scrum approach but the embedded nature of our project makes test-driven development (the way that most people do it) impractical.
I think the problem you encountered was that the expectations of the programmers changed, not that the management process went to a Scrum-style system. If programmers are constantly being shuffled around to parts of the code they are not familiar with, investigate the process behind how tasks are being delegated relative to the old method. Are tasks being assigned to the developer who knows that area the best or are they going to the developer with the shortest to-do list? Is there a long backlog of to-do items for one part of the code and a scarcity of to-do items for another part? If you want to keep developers focused on the areas they excel in, then project management will want to adjust sprint lengths and task priorities to make sure that the workload can be distributed as desired and still be feasible given the time constraints of the sprint.
The Scrum framework is pretty small: it defines a few meetings, ideal iteration lengths, the responsibilities of the product owner and scrum master... and maybe a bit more.
However, once we have started our iteration, there is nothing in Scrum that dictates how and when developers should develop something. There is only the commitment that it will be 'done' by the end of the sprint.
Scrum is about the team committing to produce results, and the team being empowered to decide how to do so. If that means 1 dev per user story, great. If that means 3 devs per user story, that's fine also. Whatever way the Scrum team believes to be the best way to do the job is what should happen.
To answer the question: yes, Scrum is possible without test-driven development.
TDD is a worthwhile/recommended practice, but the results will vary depending on team and context. For example, was TDD in place at the start of the project, or are you trying to introduce the methodology at a later date?
There's some confusion about Scrum here.
Scrum per se doesn't tell you how/when to do technical things like TDD (that's a forever moving target). Scrum tells you how/when to manage the people things that happen on a project. It is an overall project management technique, not a construction management technique.
If your manager wishes to do the three things listed above during your sprints, that's fine, but those aren't part of the Scrum framework. They are construction management concerns, which Scrum doesn't cover. They may be used in your Scrum-framework-bounded project, but they aren't in the official Scrum framework.
I think it's easy to be confused about this, though. Agile techniques like Scrum are usually evangelized by people on the 'net who are all about pushing buzzwords and 'happy shiny' things, and as such don't always understand/communicate well. At least, that's how Agile techniques were introduced to me, by an Agile apologist. It took me a good 6 months before I got past the hype / confusing terminology and figured out what they were talking about.
While there are parts of the above I agree with, I truly believe that you cannot deliver small increments of code (Scrum for software) without testing, period.
How do you know your sprint didn't break the last four? How can you guarantee deliverables if you don't know whether you broke anything delivered in the past?
While I do agree that Scrum is a management process and TDD is a software process, you must have some way to verify that you did not move backwards.
Scrum teaches daily deliverables and always moving forward, at the expense of speed.
When someone says they want to do CI or Agile or Scrum, to me this automatically means there needs to be unit testing - not integration testing, as mentioned above, but each individual moving part having its own unit tests.
If you only do an integration test, you are not testing each individual part in a self-contained way. All you prove is that the methods called in one flow work, rather than exercising each possible branch you would see in the MSIL.
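To make the branch point concrete, here is a hedged sketch (JUnit 4; the discount rule is made up): an end-to-end flow that only ever runs with loyal customers exercises one branch, while unit tests pin down both.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {

        // Hypothetical moving part with two branches.
        static double priceAfterDiscount(double price, boolean loyalCustomer) {
            return loyalCustomer ? price * 0.9 : price;
        }

        @Test
        public void loyalCustomersGetTenPercentOff() {
            assertEquals(90.0, priceAfterDiscount(100.0, true), 0.0001);
        }

        @Test
        public void newCustomersPayFullPrice() {
            assertEquals(100.0, priceAfterDiscount(100.0, false), 0.0001);
        }
    }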
I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience, practices that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include:
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible.
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many, many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horribly bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science, and the metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Quality control in real software projects goes far beyond simply checking whether the code is correct. If you look at the V-model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "less bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now". Even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control, and software engineering as a whole, is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF), which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects, as the software engineering schools would have you think. He just adds another parameter to control, which is "Do I want to control this project? Will it be needed?"
Laziness / considered boring
Management feeling it's unnecessary - the ignorant "Just do it right" attitude
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project"
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
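For illustration, a minimal JUnit 4 sketch in that spirit (the flux-capacitor function is made up so the example stays self-contained):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class FluxCapacitorTest {

        // Hypothetical unit under test: a small pure function.
        static int varianceLevel(int inputFlux, int threshold) {
            return Math.max(0, inputFlux - threshold);
        }

        @Test
        public void varianceIsZeroWhenFluxIsBelowThreshold() {
            assertEquals(0, varianceLevel(3, 10));
        }

        @Test
        public void varianceIsTheExcessWhenFluxIsAboveThreshold() {
            assertEquals(7, varianceLevel(17, 10));
        }
    }

Ten minutes of this after a complex coding session is closer to play than to drudgery.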
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests, and the programmers could simply implement something new instead of tending to the quality of something already implemented?
Short answer: It's one of those intangibles appreciated only by other, mainly experienced, developers and engineers - unless something goes wrong, at which point managers and customers are in an uproar and demand to know why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-offs or throwaways. Of course, long after the project is done, it takes on a life of its own (and becomes prominent) as different business units start depending on it for their own business processes.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless - they become a time-consuming hindrance. I see this situation all too often in companies where IT teams exist to support the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout, and is willing to do the work of convincing others of its value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor: some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool's configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/NUnit/other testing frameworks, fewer know how to write a test using such a framework. And of those, very few have a good understanding of how to make testing part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using them in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right (i.e. found a way that works for me and my team...).
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning what to test and how. (Like how to test logging in or accessing files, how to abstract yourself from the database, how to do mocking and use a mocking framework, and learning techniques and patterns that increase testability; see the sketch after this list.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
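As one sketch of "abstracting yourself from the database" with a mocking framework (Mockito here; the repository interface and service are hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    public class GreetingServiceTest {

        // Hypothetical abstraction over the database, so the test never touches a real one.
        interface UserRepository {
            String findNameById(long id);
        }

        // Hypothetical unit under test: depends on the interface, not on JDBC.
        static class GreetingService {
            private final UserRepository users;
            GreetingService(UserRepository users) { this.users = users; }
            String greet(long id) { return "Hello, " + users.findNameById(id) + "!"; }
        }

        @Test
        public void greetsUserByName() {
            UserRepository repo = mock(UserRepository.class);
            when(repo.findNameById(42L)).thenReturn("Ada");

            assertEquals("Hello, Ada!", new GreetingService(repo).greet(42L));
            verify(repo).findNameById(42L);
        }
    }

The design choice that makes this possible is constructor injection: the service asks for its dependency instead of reaching for the database itself.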
So until you find the right way: yeah, it's dull, unrewarding, hard to do, time-consuming, etc.
EDIT:
In this blog post I go into depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and testing techniques and tools. Technical conference such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that code quality is a popular subject to discuss, I would rather agree with the observation that developers do not (in general) write good, or enough, tests. I have a reasonably simple explanation for that.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10), and they have to compete with 1) managers and 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the last seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to feed to the methods. And even if you set up mock data, can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered; that is, it is bad at simulating the unexpected. You cannot usefully write tests for scenarios you haven't thought of, such as what happens during a power outage, or when the network link delivers bad data that is still CRC-correct.
I am all in favour of code inspections, as they let programmers share experience and learn code style from other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s-era computer scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned with meeting their immediate deadlines and ship date than with the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" (e.g. without unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code changed too extensively at the beginning of the project, so everyone dropped writing unit tests until things stabilized. After that everyone was happy and felt no need for unit tests. So we have a few tests that remain as history, but they are unused and probably incompatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it, nor talked to people who have. There are so many rules in the business logic that if you change something a little bit somewhere, you have no way of knowing which tests to update beyond those that crash. Who knows, the old tests may no longer cover all possibilities, and it takes time to recall what was written five years ago.
The other reason is lack of time. When you are assigned a task that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other parts of the project and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically ordered to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think up good test coverage for a task in five minutes. It takes time, and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the loss of time which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even those who want to can only write proper tests if they are experienced colleagues - at least one year of deep involvement in the project, though two or three is really needed. So new colleagues cannot manage proper tests. And there is no point in creating bad tests.
What's really 'dull' is spending more than a day chasing some random 'feature' of extreme importance through a mysterious code jungle written by someone else x years ago, with no clue what's going wrong, why it's going wrong, and with absolutely no idea what could fix it, when the job was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people feel they don't have time to learn the latest fad in software engineering, and that they can already write high-quality code. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing makes me more productive, but I've seen lots of badly formatted, unreadable, poorly designed code which passed all its tests (generally long-in-the-tooth code which had been patched many times). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test-driven, and probably should be: instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, so why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or by themselves months or years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than rushing through the tests just so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is overrated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, tells me that I have too many methods in my class. So I should cut the class into abstract classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since they might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting and wants you to use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think over-complicated code is not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think the code quality masters should really work on detecting when code is too complicated, and care less about the little things (you can refactor those really quickly with the right tools).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage, and it was a real nightmare to change anything, because whenever you changed anything you had to worry about broken (poorly written) unit tests, and you never knew what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place; for example, I did a lot of unit testing in my web page parser, because I kept finding different bugs and unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality and testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person who writes tests (I want to get better at it, and that's the only reason I do it). I understand that for others, writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, and comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but the feature might still not work on the web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change to a web page. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for five joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.
Looking at the general trend of comments in my question about Building an Aircraft using Agile, the biggest problem other than cost appears to be safety.
Do people feel that it is not possible to build a safe system (or prove it is safe) using agile? Doesn't all the iterative testing mitigate this? Is it likely that a piece of software developed using agile will never be as reliable as one developed using a methodology such as waterfall?
Agile is a method of managing a project, not a method of testing or verifying the safety of a finished project.
A safety-critical system would still need extensive testing after it is complete (functionality-wise) to be absolutely sure it is actually up to the task. I would expect this sort of work to be given to a separate team of testers who are specifically focussed on such testing.
Agile is good with soft requirements, where the traditional product lifecycle is long enough for the business goals to have changed, though in a safety-critical environment, I think that rapidly changing requirements or under-specified requirements would be A Very Bad Thing.
I don't buy the idea that using waterfall would in some way give the code some intrinsic order or stability - if the individual sprints are well managed, the code tested and reviewed, then the shorter cycle would produce code of equal quality, just in chunks.
Using Scrum gives you a heads-up earlier in the project timeline when things are getting problematic - it's not going to do anything but remove hiding places for poor performing managers / devs / whoever.
In short, it is possible to build any sort of system using Agile methods, just so long as you don't expect the method itself to test what you have built. That's for the testers.
There are a number of high-profile software failures that illuminate this issue. In particular, Ariane 5 Flight 501 and the Therac-25 are both examples of software failures that bring this problem into sharp relief. The Ariane 5 rocket veered off its flight path 37 seconds after launch due to an integer overflow in the guidance software. The accident cost $370 million in lost equipment, but there was no loss of life. The same cannot be said of the Therac-25, a medical machine that killed several people with lethal doses of radiation.
Could these problems have been prevented with a better software methodology? I'm not so sure. The management decisions that contributed to the failure of the Ariane 5 had little to do with the manner in which the software was constructed, and the Therac-25 investigation was hampered by the belief that it was not possible for the machine to fail.
Better testing methodologies could have helped. It's hard to believe that a good statically typed compiler would have failed to find the integer overflow. New testing methodologies like Pex, with its built-in theorem prover, have the capability to search for corner cases, and could have identified the sensor anomalies that existed in the Therac-25.
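Runtime overflow checks are one concrete line of defence here. A minimal Java sketch (the sensor value is made up; Ariane 5's actual guidance code was written in Ada):

    public class OverflowGuard {
        public static void main(String[] args) {
            long horizontalBias = 3_000_000_000L; // hypothetical sensor-derived value

            try {
                // Math.toIntExact throws on overflow instead of silently
                // wrapping the way a raw (int) cast would.
                int converted = Math.toIntExact(horizontalBias);
                System.out.println("converted: " + converted);
            } catch (ArithmeticException e) {
                System.out.println("conversion out of range: " + e.getMessage());
            }
        }
    }

A unit test asserting that out-of-range inputs are caught and handled would have encoded exactly the assumption that Flight 501 violated.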
But no technique is reliable unless you have an uncompromising commitment to safety, from the very highest levels of management all the way down to the people who box the product for shipment.
The WHOLE point about safety-critical systems that everyone seems to be missing here is that, because of their potential to cause loss of life (sometimes on a large scale), they have to be proven to be correct. Often they require an operating certificate from a licensing authority that is satisfied that the system's requirements have been accurately specified (often using formal methods like Z and VDM), that the design is a reflection of those requirements (and can be proven to be such), and that the code is an accurate reflection of the design (again, proven to be so).
More often than not, the option of testing the product to provide such proof doesn't exist (OK guys, let's go live with the nuclear reactor, Boeing 777, Therac-25 - whatever - and see where the bugs are). Safety-critical systems (especially those categorised at SIL 3 and above) need thorough and comprehensive documentation, which is totally at odds with the Agile manifesto in all respects as far as I can see. Programmers of safety-critical systems are not even allowed to make changes to the released software without requesting a re-evaluation of the software by the licensing authority, and rightly so, given the rigour that goes into proving the correctness of the first version and the catastrophic implications of a screw-up.
Agile is a dynamic development model: you use it when your application's requirements are likely to change quickly and unforeseeably, and while the number of your developers is still countable.
You do NOT use it just because it is modern/in/hip/cool.
Sure, you can find errors with unit tests, but they never prove their absence. Changing or adding application requirements during development is a great way to introduce hidden errors.
For precisely planned applications, which is typical of safety-critical applications, you want to use a more static development model like waterfall or the V-model.
Most safety-critical or mission-critical systems can benefit from a more standard development structure, such as the waterfall model and formal code reviews. Such methods help maintain a more structured code base. A great book about software construction, especially if the project has already begun using an Agile process, is Code Complete, 2nd ed.
As I've increasingly absorbed Agile thinking into the way I work, yagni ("you aren't going to need it") seems to become more and more important. It seems to me to be one of the most effective rules for filtering out misguided priorities and deciding what not to work on next.
Yet yagni seems to be a concept that is barely whispered about here at SO. I ran the obligatory search, and it only shows up in one question title - and then in a secondary role.
Why is this? Am I overestimating its importance?
Disclaimer. To preempt the responses I'm sure I'll get in objection, let me emphasize that yagni is the opposite of quick-and-dirty. It encourages you to focus your precious time and effort on getting the parts you DO need right.
Here are some off-the-top ongoing questions one might ask.
Are my Unit Tests selected based on user requirements, or framework structure?
Am I installing (and testing and maintaining) Unit Tests that are only there because they fall out of the framework?
How much of the code generated by my framework have I never looked at (but still might bite me one day, even though yagni)?
How much time am I spending working on my tools rather than the user's problem?
When pair programming, much of the observer's value often lies in "yagni".
Do you use a CRUD tool? Does it allow (nay, encourage) you to use it as an _RU_ tool, or a C__D tool, or are you creating four pieces of code (plus four unit tests) when you only need one or two?
TDD has subsumed YAGNI in a way. If you do TDD properly - that is, only write tests that result in required functionality, then develop the simplest code that passes the test - then you are following the YAGNI principle by default. In my experience, it is only when I get outside the TDD box and start writing code before tests, tests for things that I don't really need, or code that is more than the simplest possible way to pass the test, that I violate YAGNI.
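A tiny sketch of that discipline (JUnit 4; the example is made up): the tests come first, and the implementation stays only as general as the tests demand - no subtractive numerals, no thousands, because no test asks for them yet.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RomanNumeralTest {

        // The simplest code that passes the tests below; generality is
        // deliberately deferred until a test actually requires it (YAGNI).
        static String toRoman(int n) {
            StringBuilder sb = new StringBuilder();
            while (n >= 10) { sb.append("X"); n -= 10; }
            while (n >= 1)  { sb.append("I"); n -= 1; }
            return sb.toString();
        }

        @Test public void one()    { assertEquals("I",   toRoman(1)); }
        @Test public void two()    { assertEquals("II",  toRoman(2)); }
        @Test public void ten()    { assertEquals("X",   toRoman(10)); }
        @Test public void twelve() { assertEquals("XII", toRoman(12)); }
    }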
In my experience, writing more than the simplest code to pass the test is my most common faux pas when doing TDD - I tend to jump ahead and start writing code to pass the next test. That often results in me compromising the remaining tests by having a preconceived idea based on my code rather than the requirements of what needs to be tested.
YMMV.
Yagni and KISS (keep it simple, stupid) are essentially the same principle. Unfortunately, I see KISS mentioned about as often as I see "yagni".
In my part of the wilderness, the most common cause of project delays and failures is poor execution of unnecessary components, so I agree with your basic sentiment.
The freedom to change drives YAGNI. In a waterfall project, the mantra is "control scope". Scope is controlled by establishing a contract with the customer. Consequently, the customer stuffs everything they can think of into the scope document, knowing that changes to scope will be difficult once the contract is signed. As a result, you end up with an application that has a laundry list of features, not a set of features that have value.
With an agile project, the product owner builds a prioritized product backlog. The development team builds features based on priority, i.e. value. As a result, the most important stuff gets built first. You end up with an application whose features are valued by the users. The stuff that is not important falls off the list or doesn't get done. That is YAGNI.
While YAGNI is not a practice, it is a result of the prioritized backlog. The business partner values the flexibility afforded to the business, given that they can change and reprioritize the product backlog from iteration to iteration. It is enough to explain that YAGNI is the benefit gained when we readily accept change, even late in the process.
The problem I find is that people tend to bucket even writing factories or using DI containers (unless you already have them in your codebase) under YAGNI. I agree with JB King there. For many people I've worked with, YAGNI seems to be a license to cut corners and write sloppy code.
For example, I was writing a PinPad API to abstract over multiple models/manufacturers of PIN pads. I found that unless I had the overall structure, I couldn't even write my unit tests. Maybe I'm not a very seasoned practitioner of TDD. I'm sure there will be differing opinions on whether what I did was YAGNI or not.
I have seen a lot of posts on SO referencing premature optimization which is a form of yagni, or at least ydniy (you don't need it yet).
I don't see YAGNI as the opposite of quick-and-dirty, really. It is doing just what is needed and no more, not planning as if the software you write has to last 50 years. It may come up rarely because there aren't really that many questions to ask about it, at least to my mind. It's similar to the "don't repeat yourself" and "keep it simple, stupid" rules, which become common knowledge but aren't necessarily dissected and analyzed in 101 ways. Some things are simple enough that you usually get them soon after a little practice. And some things get developed behind the scenes; if you turn around and look, you may notice them - which may be another way to state this.