Is Scrum possible without test driven development? [closed] - unit-testing

I have now witnessed two companies move to agile development with Scrum.
In both cases the standard of coding was good enough while each part of the application was being worked on by only one or two developers, with each developer spending a reasonable amount of time on one part of the application before moving to the next task. Defect rates were also reasonable.
However with Scrum the developers are expected:
to all be able to work on all the bits of the application;
to only work on one area of the application for a few days at most before moving to the next area;
to mostly work on code they did not write.
Code quality became an issue in both of the Scrum projects.
So is there a way to do Scrum that does not lead to these problems, without first getting all the developers to do test driven development?
Have you seen Scrum work well on a large project without test driven development? (If so how?)

I'd like to expand on what Dan said.
It's a very common misconception that Scrum / Agile dictates software engineering principles. This is a fallacy for many reasons. As Dan mentioned, Scrum is a software management process, NOT a software engineering process. That said, you will very often see engineering principles associated with Scrum; methodologies such as TDD and XP tend to complement the management methodology that Scrum promotes, but they are not required.
The reason that CI, TDD, and other engineering practices are so often found hand-in-hand with Scrum is that in general, many are good practices to follow no matter what management methodology is used.
I'd like to address a couple other fallacies in your OP:
However with Scrum the developers are expected:
* to all be able to work on all the bits of the application.
* to only work on one area of the application for a few days at most before moving to the next area
* to mostly work on code they did not write
As mentioned above, Scrum doesn't dictate what type or kind of work a developer works on. The developers themselves decide on what work to commit themselves to; if a database-heavy dev wants to only work on the DAL and associated stories, there's no reason that they cannot.
Again, Scrum doesn't dictate anything about how to build the application, so your second point is moot (see point 1).
This is a fallacy, since there is nothing that says a developer should only work on code that isn't theirs, nor anything about how a developer should develop. If a developer on a Scrum team finds themselves only working on others' code, that is coincidental, not a result of the Scrum process itself.
See this question/answer for more information on the qualities generally expected in a developer working Scrum.

Yes, Scrum describes the software management approach. The program and project management paradigm should not dictate whether or not you use test driven development.
TDD is a software development practice or technique and although it works well with Scrum I don't think it will make or break your success with the practice.
I have personally seen Scrum work well on medium sized projects without a test driven approach to development. That is not to say we didn't write automated tests, they just were not always written first.

Regardless of the use of Scrum, what you were seeing was a change from a code-ownership approach to a communal-code approach. In order for that to work, there has to be a process change which supports it. One such possibility is TDD. There are others (automated unit testing even if it doesn't drive design, coupled with code reviews; strong design communication; greater design up front; not developing on code without first pairing with the original author of that code; and more I'm sure you could think of).
Communal approaches work in smaller communities (in large ones it can degenerate into a tragedy of the commons) with a high sense of cohesion between the members.

We do Scrum at work, but we don't practice TDD. Nothing in the Scrum "guidelines" tells you that you have to use TDD. In fact, most agile practices are merely recommendations, insofar as they have proven to work well in agile environments (or even non-agile ones), without being a must if you want to implement Scrum.
We do write lots and lots of unit and integration tests to avoid extensive manual testing and to ensure that later changes to the code don't lead to any unpredicted side effects. But that's not TDD. It's mostly a sensible approach to ensuring good code and software quality.
Please note that not implementing TDD was not an "active" decision; it just happened. We are encouraged to "write tests first", e.g. when fixing a bug, so it's a voluntary, situation-driven, non-obligatory way of getting a feel for TDD by applying it on a regular basis, but it is not mandatory.
Like others said: Scrum is a framework which can hold whatever practices you want to enforce in your development team. Some agile practices come naturally, because they generally make sense, but which ones you want to use and which ones you don't want is up to you.

Yes, Scrum is entirely possible and in most cases implemented without utilizing a TDD approach.
However, the flexibility that TDD provides is certainly something that a Scrum methodology can benefit from.

The main project I work on uses a Scrum approach but the embedded nature of our project makes test-driven development (the way that most people do it) impractical.
I think the problem you encountered was that the expectations of the programmers changed, not that the management process went to a Scrum-style system. If programmers are constantly being shuffled around to parts of the code they are not familiar with, investigate the process behind how tasks are being delegated relative to the old method. Are tasks being assigned to the developer who knows that area the best or are they going to the developer with the shortest to-do list? Is there a long backlog of to-do items for one part of the code and a scarcity of to-do items for another part? If you want to keep developers focused on the areas they excel in, then project management will want to adjust sprint lengths and task priorities to make sure that the workload can be distributed as desired and still be feasible given the time constraints of the sprint.

The Scrum framework is pretty small: it defines a few meetings, ideal lengths of iteration, the responsibilities of the product owner and scrum master... and maybe a bit more.
However, once we have started our iteration, there is nothing in Scrum that dictates how and when developers should develop something. There is only the commitment that it will be 'done' by the end of the sprint.
Scrum is about the team committing to produce results, and the team being empowered to decide how to do so. If that means 1 dev per user story, great. If that means 3 devs per user story, that's fine also. Whatever way the Scrum team believes to be the best way to do the job is what should happen.
To answer the question, yes Scrum is possible without test driven development.
TDD is a worthwhile/recommended practice, but the results will vary depending on team and context. For example, was TDD in place at the start of the project, or are you trying to introduce the methodology at a later date?

There's some confusion about Scrum here.
Scrum per se doesn't tell you how/when to do technical things like TDD (that's a forever moving target). Scrum tells you how/when to manage the people things that happen on a project. It is an overall project management technique, not a construction management technique.
If your manager wishes to do the three things listed above during your sprints, that's fine, but that isn't part of the framework of Scrum. Those are for construction management, which isn't Scrum. It may be used in your Scrum-framework-bounded project, but it isn't in the official Scrum framework.
I think it's easy to be confused about this, though. Agile techniques like Scrum are usually evangelized by people on the 'net who are all about pushing buzzwords and 'happy shiny' things, and as such don't always understand/communicate well. At least, that's how Agile techniques were introduced to me, by an Agile apologist. It took me a good 6 months before I got past the hype / confusing terminology and figured out what they were talking about.

While there are parts of the above I agree with, I truly believe that you cannot deliver small increments of code (Scrum for software) without testing, period.
How do you know your sprint didn't break the last four? How can you guarantee deliverables if you don't know that you didn't break anything in the past?
While I do agree that Scrum is a management process and TDD is a software process, you must have some way to verify that you did not move backwards.
Scrum teaches daily deliverables and always moving forward, even at the expense of speed.
When someone says they want to do CI or Agile or Scrum, to me this automatically means there needs to be unit testing: not integration testing as mentioned above, but rather each individual moving part having its own unit tests.
If you do an integration test, you are not testing each individual part in a self-contained way. All you prove is that the method called in one flow works, rather than exercising each possible branch you would see in the MSIL.
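To make the distinction concrete, here is a minimal JUnit sketch of what per-branch unit testing looks like. The Pricing class and its discount rule are invented for illustration; the point is only that each branch of the method gets its own self-contained test.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test: one method, two branches.
class Pricing {
    static double discountedPrice(double price, boolean isMember) {
        return isMember ? price * 0.9 : price; // the branch in question
    }
}

public class PricingTest {
    @Test
    public void membersGetTenPercentOff() {
        assertEquals(90.0, Pricing.discountedPrice(100.0, true), 0.001);
    }

    @Test
    public void nonMembersPayFullPrice() {
        assertEquals(100.0, Pricing.discountedPrice(100.0, false), 0.001);
    }
}

An integration test driving one user flow through this code would typically exercise only one of those branches; the two unit tests above pin down both.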

Related

Forcing Unit Testing on Developers

First a little background. The company I work for writes web-based software that is a hosted solution for our customers (i.e. we are an ASP, an Application Service Provider). We are adopting agile practices such as Scrum, and we execute sprints to build new features for our product.
I am a proponent of TDD (Test Driven Development), and as part of what I deliver in a sprint I always write tests and I always get them integrated with the build (i.e. CruiseControl.NET); however, the other developers do not follow this practice and it is not enforced.
Is it a good practice to force a development group into providing unit tests as a part of what is delivered in a sprint?
Unless you are in a position of authority, the best thing you can do is to convince them of the value of the test suite.
It's very difficult to get developers to see the light on this issue if they aren't seeing it done right.
Try to pair with another developer and show them the benefits and the clarity that comes from writing the tests FIRST. If you don't do this, they are likely to write all of their code, get it working, and then write tests. So, from their point of view, it will feel like simply an extra task that doesn't help them get things done.
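If it helps to have something concrete to pair on, a test-first demonstration can be as small as the sketch below. Slug and slugify are made-up names; what matters is that the test is written before the code it exercises.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Step 1: write the test first. It won't even compile until
// Slug.slugify exists, and that is exactly the point: the test
// drives the shape of the code.
public class SlugTest {
    @Test
    public void replacesSpacesAndLowercases() {
        assertEquals("hello-world", Slug.slugify("Hello World"));
    }
}

// Step 2: write just enough code to make the test pass.
class Slug {
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("\\s+", "-");
    }
}

Watching the test fail first and then pass is usually what makes the point land.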
Also keep in mind that people often do not understand how to write good tests. Even more, some do not know how to make use of tools like jmock, which can lead to them getting stuck and giving up on writing a test.
Forcing anything onto anybody is not a good practice in my view. I would show them the benefits of TDD at every opportunity I get. This should automatically get the rest of the team to voluntarily practise TDD.
You don't want lip-service unit testing, you want whole-hearted unit testing. That isn't something that can be forced. What you need to do is influence your teammates over time to see the benefits of unit testing and to develop a unit testing culture.
To start with, you need to understand that different people change for different reasons. In Crossing the Chasm terms, visionaries will adopt new techniques because they are better, but pragmatists adopt new techniques either because they solve a problem/pain they currently have or because everyone else is adopting them.
Your mission then is to show how unit testing can solve a pain your team currently feels. As you win people over one by one, eventually you reach a tipping point where unit testing is the norm and everyone goes along with it. However, if you can't tie unit testing to a pain your team feels, then your efforts to convince them will likely fail.
Considering that unit testing improves code and software quality in the long term, I would say that, yes, it is good practice to have your developers provide unit tests, be it part of some kind of sprint or not.
The two main barriers to unit tests I've seen are:
the developers don't get the point: "why would we write more code just to test?"
"we don't have time to write unit tests"
To answer the first point, you'll have to provide some sort of demonstration / training, I suppose; if you can get developers to see why unit tests are useful, they will like them, and use/develop them; but they need to see why those are useful: unless you are their boss, you cannot force people to develop unit tests.
And, even if you are their boss, they will probably not do the best possible job if they are being forced: unit testing is often done better if people understand why and how!
To answer the second point... Well, you obviously need to get your developers some "special" time to develop unit tests; it can mean less time spent on manual testing, btw.
Another thing is: it is hard to know "what to test" and "how to test"; you will need to explain / demonstrate that to your colleagues: some things cannot be tested, some things don't need to be, and some things are not "unit-testable" -- well, I suppose, unless your software is really well engineered ^^
I've gotten in that position on many jobs and contracts in the past, so I've finally gotten discouraged and embraced the darkness by advocating Development Driven Development.
I've found that when most managers embrace XP, they're embracing throwing out the documentation, not really doing TDD. Programmers on most teams are rewarded for quick hacks that get the defect out of their queue, and it's one manager in about ten who has the guts to stand up to senior management as an advocate of the overall quality of the product as opposed to the bottom-line-for-this-quarter way of doing business. After all, most software jobs that pay anything are corporate sponsored, and Freud proved that corporations are insane.
Or at least, he should have.
There is a great Joel on Software article on this called Getting Things Done When You're Only a Grunt. His strategies, applied to unit testing, would be the following:
Just Do It
Most important, I think. Regularly write tests, do not make it appear as if you yourself see them as a minor part of development.
Harness the Power of Viral Marketing
If you see an error in someone else's code, write a test that triggers it and present both to him. Maybe he sees your point.
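As a hypothetical illustration (all names invented): suppose a colleague's range check wrongly excludes its upper bound. A one-method JUnit test documents the expected behaviour and fails against the buggy code, and you can hand both to the author.

import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Hypothetical buggy code: uses '<' where '<=' was intended,
// so the upper bound is wrongly excluded from the range.
class Range {
    static boolean contains(int value, int low, int high) {
        return value >= low && value < high; // bug: should be <= high
    }
}

public class RangeTest {
    // This test states the expected behaviour and fails against the
    // code above; present both to the author and discuss.
    @Test
    public void upperBoundIsInsideTheRange() {
        assertTrue(Range.contains(10, 1, 10));
    }
}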
Create a Pocket of Excellence
Identify the team members who are open to the idea but not quite sure how to work with unit tests, and set up a number of test classes until every one of them knows how to do it. Then turn towards the other ones. Things are a lot easier to establish once you are not the only one anymore.
Neutralize The Bozos
There will be team members who are almost impossible to get to write tests. Maybe those should be dealt with by regularly breaking their code with a new commit - and then pointing out that since they don't have tests it was hard for you to notice.
It depends on your version control system, but some version control software lets you run scripts before a check-in or before a merge to the production branch, and will not let the developer check in if the unit tests fail.
First, talk to your manager. If he is convinced that testing is a good thing, add a coverage check to your build system. If the coverage of your unit tests falls below a certain level, you can treat it as a failed build. This gives your colleagues a measure, a way to see when they fail to deliver tested code.
In your case, NCover seems to integrate nicely into CC.NET.
Create a metric that you want developers to achieve.
Make sure there is a subtask for Unit test creation for every story.
No. In my experience, TDD isn't all that useful in practice. I sometimes use it for really general fundamental classes (like geometry or generic data structures) that lend themselves naturally to automated tests. But for UI components or business logic, I find it's more trouble than it's worth.

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
"There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible"
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to:
be too concerned with hypothetical examples, and far too often be the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on;
get too emotional, and imbue our code with too many human traits (think of the term "code smell" for a good example);
be dominated by people who write horribly bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".
Suppose that you have one huge function of 5,000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "fewer bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now", even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control and software engineering as a whole is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code that's for sure, but it's not everyone who can be a programmer.
EDIT:
I've come across this paper (PDF) by the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects, as the software engineering schools want to make you think. He just adds another parameter to control, which is "Do I want to control this project? Will it be needed?"
* Laziness / considered boring
* Management feeling it's unnecessary: an ignorant "just do it right" attitude.
* "This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project"
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to deal with code quality when there are thousands of new feature requests and the programmers could simply implement something new instead of taking care of the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes a life of its own (and becomes prominent) as different business units start depending on it for their own business process.
At which point a new solution needs to be engineered; but without practice in using these tools and good-practices, these tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams are support to the business, where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of its value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor: some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way: you run the tool and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/NUnit/other testing frameworks, fewer know how to write a test using such a framework. And of those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using them in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right (i.e. found a way that works for me and my team...).
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing into my IDE, and having shortcuts to run/debug tests.
Learning how to test what. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learning techniques and patterns that increase testability.) There's a sketch of the mocking part after this list.
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
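The mocking point above deserves a sketch. The earlier post mentions jmock; this example uses Mockito instead, and every name in it (CustomerRepository, GreetingService) is made up. The idea is abstracting yourself from the database by depending on an interface the test can mock.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Made-up interface standing in for the database access layer.
interface CustomerRepository {
    String findNameById(long id);
}

// The class under test depends on the interface, not on a real DB.
class GreetingService {
    private final CustomerRepository repository;

    GreetingService(CustomerRepository repository) {
        this.repository = repository;
    }

    String greet(long customerId) {
        return "Hello, " + repository.findNameById(customerId) + "!";
    }
}

public class GreetingServiceTest {
    @Test
    public void greetsCustomerByName() {
        // Mock the repository so no database is needed for the test.
        CustomerRepository repo = mock(CustomerRepository.class);
        when(repo.findNameById(42L)).thenReturn("Ada");

        assertEquals("Hello, Ada!", new GreetingService(repo).greet(42L));
    }
}

Because the service takes its dependency through the constructor, the test never touches a real database; that design pressure is one of the quiet benefits of writing tests at all.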
So until you find the right way: yeah, it's dull, non-rewarding, hard to do, time-consuming, etc.
EDIT:
In this blog post I go in depth into some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on continuous integration and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10), and they have to compete with 1) managers and 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. Even if you set up mock data, can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered. That is, unit testing is bad at simulating the unexpected: what could happen in a power outage, or when the network link sends bad data that is still CRC-correct. If you haven't considered such cases, writing tests for them is futile.
I am all in favour of code inspections as they let programmers share experience and code style from other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s-era computer scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" (e.g. without unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was being changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and not in need of unit tests. So we have a few tests staying there as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is the lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and shallow-test it, not to think of all possible cases and relations to other project parts and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of a good test coverage for a task in five minutes. It will take time and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the time loss, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests: at least one year of deep involvement in the project, but two or three is really needed. So new colleagues do not manage to write proper tests. And there is no point in creating bad tests.
It's 'dull' to catch some random 'feature' of extreme importance for more than a day in a mysterious code jungle written by someone else x years ago, without any clue what's going wrong, why it's going wrong, and with absolutely no idea what could fix it, when it was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing makes me more productive, but I've seen lots of badly formatted, unreadable, poorly designed code which passed all its tests (generally long-in-the-tooth code which had been patched many times). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or an outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is overrated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, says that I have too many methods in my class. So I should cut the class into abstract class(es) (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since it might still have too many methods; been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting, and you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on finding when code is too complicated and not care so much about little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything. Because when you change anything, you have to worry about broken (poorly written) unit tests, and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place; for example, I did a lot of unit testing in my web page parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you want to also test database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Less bugs == less work? No, there's always something to do. Less bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person that writes tests (I want to get better at it; it's the only reason I do it). I understand that for others, writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on a web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change on a web page. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

Getting started in Unit Testing as a group in these economic times

We have a group of a few developers and some business analysts. We as developers would like to start adding unit testing as part of our coding practices so that we can deliver maintainable and extensible code, especially since we will also be the ones supporting and enhancing the application in the future. But in this economic downturn we are struggling to get started, because we are challenged to just deliver solutions as fast as possible, with quality not being the top priority. What can we do or say to show that we will be able to deliver faster and with higher quality, as well as prepare for future enhancements?
Basically we just need to get over the learning curve of incorporating unit testing into our daily work, but we cannot do that now because it is viewed as an unnecessary overhead that would delay our projects that the business needs now.
We as developers want to provide the highest value to the business, especially quickly, but we know that we will also need to do this 6 months from now and we need to plan for that as well, and we believe that unit testing will help us greatly down the line.
EDIT
All awesome input, thank you. I personally know how to write unit tests, but I don't have the experience to say whether or not a given unit test is good. I have just ordered Test Driven Development: By Example and will take the initiative to get the ball rolling on incorporating unit testing in our group.
You need to just start doing it, with or without permission. In the end it will make you more productive and increase your code quality. You can start small by including unit tests for something critical, and once you've shown the benefits, you're in.
Start unit testing the functions or classes when you create them. Begin with simple classes/functions that do not have external dependencies (DB, file system).
Share your progress inside the team. Count the number of tests and display a big chart showing your progress (unless the management/analysts are very hostile against unit testing).
Read about TDD: "Test-Driven Development: By Example". Writing the tests first leads to code that can be easily tested. If you write the production code first, you may have a hard time putting it under test.
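To give a flavour of that book's style, here is a compressed red-green cycle in plain JUnit. The Money class here is invented (the book grows a much richer one); the point is that the test exists before the class does, and the class only grows what the test demands.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Red: this test is written before Money exists, and fails first.
public class MoneyTest {
    @Test
    public void addingAmountsOfTheSameCurrency() {
        Money five = new Money(5, "USD");
        assertEquals(new Money(10, "USD"), five.plus(five));
    }
}

// Green: the simplest Money that makes the test pass.
class Money {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money plus(Money other) {
        return new Money(amount + other.amount, currency);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return amount == m.amount && currency.equals(m.currency);
    }

    @Override
    public int hashCode() {
        return 31 * amount + currency.hashCode();
    }
}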
I would like to recommend the book
Pragmatic Software Testing
The book Pragmatic Unit Testing is also good; although it focuses on C# and NUnit, there are topics which are applicable independent of language.
Just do it...
If it's part of your personal process for any new code, you'll at least have that covered. You'll probably never get permission to add unit tests to cover all your code ever but you can at least be sure that no future changes undo work from a given point in time.
Think of it from the business side: why do you need to write code to prove that the code you've already written is correct? Why is it wrong?
Getting 100% coverage would be nice but take your time getting there and don't write tests for existing code that isn't currently wrong; write the tests as it breaks so you at least never undo something unintentionally.
You don't even need to discuss it with management (though the situation you describe is far from ideal). It's like Design by Contract: I introduced it in previous code I worked on, just as part of the development process. Once it's in, and it works, trust me, no one will dare to remove it.
Unit testing can also be seen as a part of development. Hence, if you have "20 days" for developing features A, B and C, you can typically include unit tests in your estimations for development itself.
Making unit tests is surprisingly easy. Compared with multithreading problems or any sort of complex design, unit testing is very easy to grasp for any competent developer.
You can read good literature about it (you have dozens of tutorials online) in, say, half a day, and start doing your first tests in the afternoon.
Really - just do it!
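For example, a first afternoon's test can be as modest as pinning down behaviour you already rely on. These two JUnit cases are invented for illustration; they pin down what String.split actually does, which is exactly the kind of small, confidence-building start meant above.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A realistic first test: pick behaviour your code already depends
// on and write down what it actually does.
public class FirstTest {
    @Test
    public void splittingACsvLineYieldsThreeFields() {
        assertEquals(3, "a,b,c".split(",").length);
    }

    @Test
    public void splittingAnEmptyStringYieldsOneEmptyField() {
        // Perhaps surprising, and exactly why it is worth a test.
        assertEquals(1, "".split(",").length);
    }
}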
I know this may not be readily available to you, but I believe getting someone who's test-infected onto your team to show you the way is the easiest way to maintain productivity while introducing testing. Your people obviously need to know the concepts, which they can pick up by reading books or attending user groups. A test-infected developer can show you how to do it in your practical everyday work with an absolutely minimal loss of productivity. It wouldn't surprise me if your speed increased immediately, but I wouldn't make any claims of that sort. With test experience also comes knowledge of testable designs, which I think is key.

YAGNI - The Agile practice that must not be named? [closed]

As I've increasingly absorbed Agile thinking into the way I work, yagni ("you aren't going to need it") seems to become more and more important. It seems to me to be one of the most effective rules for filtering out misguided priorities and deciding what not to work on next.
Yet yagni seems to be a concept that is barely whispered about here at SO. I ran the obligatory search, and it only shows up in one question title - and then in a secondary role.
Why is this? Am I overestimating its importance?
Disclaimer. To preempt the responses I'm sure I'll get in objection, let me emphasize that yagni is the opposite of quick-and-dirty. It encourages you to focus your precious time and effort on getting the parts you DO need right.
Here are some off-the-top ongoing questions one might ask.
Are my Unit Tests selected based on user requirements, or framework structure?
Am I installing (and testing and maintaining) Unit Tests that are only there because they fall out of the framework?
How much of the code generated by my framework have I never looked at (but still might bite me one day, even though yagni)?
How much time am I spending working on my tools rather than the user's problem?
When pair programming, the value of the observer's role often lies in saying "yagni".
Do you use a CRUD tool? Does it allow (nay, encourage) you to use it as an _RU_ tool, or a C__D tool, or are you creating four pieces of code (plus four unit tests) when you only need one or two?
TDD has subsumed YAGNI in a way. If you do TDD properly, that is, only write those tests that result in required functionality, then develop the simplest code to pass the test, then you are following the YAGNI principle by default. In my experience, it is only when I get outside the TDD box and start writing code before tests, tests for things that I don't really need, or code that is more than the simplest possible way to pass the test that I violate YAGNI.
In my experience the latter is my most common faux pas when doing TDD -- I tend to jump ahead and start writing code to pass the next test. That often results in me compromising the remaining tests by having a preconceived idea based on my code rather than the requirements of what needs to be tested.
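As a hypothetical sketch of "the simplest code to pass the test", with all names made up: the implementation below is deliberately under-engineered, and stays that way until a new test (i.e. a real requirement) forces it to change.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShippingTest {
    // The only requirement so far: flat-rate domestic shipping.
    @Test
    public void flatRateForDomesticOrders() {
        assertEquals(5.0, Shipping.costFor("US"), 0.001);
    }
}

class Shipping {
    // YAGNI-compliant: no currency tables, no carrier plugins, no
    // configuration framework; just the behaviour the test demands.
    // Generalize only when a new test forces it.
    static double costFor(String countryCode) {
        return 5.0;
    }
}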
YMMV.
Yagni and KISS (keep it simple, stupid) are essentially the same principle. Unfortunately, I see KISS mentioned about as often as I see "yagni".
In my part of the wilderness, the most common cause of project delays and failures is poor execution of unnecessary components, so I agree with your basic sentiment.
The freedom to change drives YAGNI. In a waterfall project, the mantra is control scope. Scope is controlled by establishing a contract with the customer. Consequently, the customer stuffs all they can think of into the scope document, knowing that changes to scope will be difficult once the contract has been signed. As a result, you end up with applications that have a laundry list of features, not a set of features that have value.
With an agile project, the product owner builds a prioritized product backlog. The development team builds features based on priority, i.e., value. As a result, the most important stuff gets built first. You end up with an application that has features that are valued by the users. The stuff that is not important falls off the list or doesn't get done. That is YAGNI.
While YAGNI is not a practice, it is a result of the prioritized backlog list. The business partner values the flexibility afforded the business, given that they can change and reprioritize the product backlog from iteration to iteration. It is enough to explain that YAGNI is the benefit gained when we readily accept change, even late in the process.
The problem I find is that people tend to bucket even writing factories or using DI containers (unless you already have that in your codebase) under YAGNI. I agree with JB King there. For many people I've worked with, YAGNI seems to be a license to cut corners / to write sloppy code.
For example, I was writing a PinPad API for abstracting multiple models/manufacturers of PIN pads. I found that unless I had the overall structure, I couldn't even write my unit tests. Maybe I'm not a very seasoned practitioner of TDD. I'm sure there'll be differing opinions on whether what I did is YAGNI or not.
I have seen a lot of posts on SO referencing premature optimization which is a form of yagni, or at least ydniy (you don't need it yet).
I don't see YAGNI as the opposite of quick-and-dirty, really. It is doing just what is needed and no more, and not planning as if the software someone writes has to last 50 years. It may come up rarely because there aren't really that many questions to ask around it, at least to my mind. It is similar to the "don't repeat yourself" and "keep it simple, stupid" rules that become common but aren't necessarily dissected and analyzed in 101 ways. Some things are simple enough that they are usually picked up after a little practice. Some things get developed behind the scenes, and if you turn around and look, you may notice them; that may be another way to state things.

What are the primary differences between TDD and BDD? [closed]

Test Driven Development has been the rage in the .NET community for the last few years. Recently, I have heard grumblings in the ALT.NET community about BDD. What is it? What makes it different from TDD?
I understand BDD to be more about specification than testing. It is linked to Domain Driven Design (don't you love these *DD acronyms?).
It is linked with a certain way to write user stories, including high-level tests. An example by Tom ten Thij:
Story: User logging in
As a user
I want to login with my details
So that I can get access to the site
Scenario: User uses wrong password
Given a username 'jdoe'
And a password 'letmein'
When the user logs in with username and password
Then the login form should be shown again
(In his article, Tom goes on to directly execute this test specification in Ruby.)
The pope of BDD is Dan North. You'll find a great introduction in his Introducing BDD article.
You will find a comparison of BDD and TDD in this video, as well as an opinion about BDD as "TDD done right" by Jeremy D. Miller.
March 25, 2013 update
The video above has been missing for a while. Here is a recent one by Llewellyn Falco, BDD vs TDD (explained). I find his explanation clear and to the point.
To me, the primary difference between BDD and TDD is focus and wording. And words are important for communicating your intent.
TDD directs focus onto testing. And since in the "old waterfall world" tests come after implementation, this mindset leads to a wrong understanding and behaviour.
BDD directs focus onto behaviour and specification, and so waterfall minds are distracted. So BDD is more easily understood as a design practice and not as a testing practice.
There seem to be two types of BDD.
The first is the original style that Dan North discusses and which caused the creation of the xBehave style frameworks. To me this style is primarily applicable for acceptance testing or specifications against domain objects.
The second style is what Dave Astels popularised and which, to me, is a new form of TDD with some serious benefits. It focuses on behavior rather than testing, and on small test classes, trying to get to the point where you basically have one line per specification (test) method. This style suits all levels of testing and can be done using any existing unit testing framework, though newer (xSpec style) frameworks help focus on the behavior rather than the testing.
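As a rough illustration of that one-line-per-specification style, here is a Python sketch runnable with pytest (the Stack class and its behaviours are invented); each test method names one behaviour and checks exactly that:

# A tiny class under specification.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def is_empty(self):
        return not self._items

# xSpec-flavoured tests: the method names read as specifications.
class TestANewStack:
    def test_it_is_empty(self):
        assert Stack().is_empty()

    def test_it_is_no_longer_empty_after_a_push(self):
        stack = Stack()
        stack.push("item")
        assert not stack.is_empty()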
There is also a BDD group which you might find useful:
http://groups.google.com/group/behaviordrivendevelopment/
Test-Driven Development is a test-first software development methodology, which means that it requires writing test code before writing the actual code that will be tested. In Kent Beck’s words:
The style here is to write a few lines of code, then a test that should run, or even better, to write a test that won't run, then write the code that will make it run.
After figuring out how to write one small piece of code, now, instead of just coding on, we want to get immediate feedback and practice "code a little, test a little, code a little, test a little." So we immediately write a test for it.
So TDD is a low-level, technical methodology that programmers use to produce clean code that works.
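As a minimal sketch of that red/green rhythm (the is_leap_year function is an invented example, runnable with pytest):

# Red: these tests are written first and fail, because is_leap_year
# does not exist yet.
def test_a_century_is_not_a_leap_year():
    assert not is_leap_year(1900)

def test_every_fourth_century_is_a_leap_year():
    assert is_leap_year(2000)

# Green: write just enough code to make the tests pass, then refactor.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)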
Behaviour-Driven Development is a methodology that was created based on TDD, but evolved into a process that doesn't concern only programmers and testers; instead it deals with the entire team and all important stakeholders, technical and non-technical. BDD started out of a few simple questions that TDD doesn't answer well: How many tests should I write? What should I actually test, and what shouldn't I? Which of the tests I write will in fact be important to the business or to the overall quality of the product, and which are just my over-engineering?
As you can see, such questions require collaboration between technology and business. Business stakeholders and domain experts often can tell engineers what kind of tests sound like they would be useful—but only if the tests are high-level tests that deal with important business aspects. BDD calls such business-like tests “examples,” as in “tell me an example of how this feature should behave correctly,” and reserves the word “test” for low-level, technical checks such as data validation or testing API integrations. The important part is that while tests can only be created by programmers and testers, examples can be collected and analysed by the entire delivery team—by designers, analysts, and so on.
In a sentence, one of the best definitions of BDD I have found so far is that BDD is about “having conversations with domain experts and using examples to gain a shared understanding of the desired behaviour and discover unknowns.” The discovery part is very important. As the delivery team collects more examples, they start to understand the business domain more and more and thus they reduce their uncertainty about some aspects of the product they have to deal with. As uncertainty decreases, creativity and autonomy of the delivery team increase. For instance, they can now start suggesting their own examples that the business users didn’t think were possible because of their lack of tech expertise.
Now, having conversations with the business and domain experts sounds great, but we all know how that often ends up in practice. I started my journey with tech as a programmer. As programmers, we are taught to write code—algorithms, design patterns, abstractions. Or, if you are a designer, you are taught to design—organize information and create beautiful interfaces. But when we get our entry-level jobs, our employers expect us to "deliver value to the clients." And among those clients can be, for example... a bank. But I could know next to nothing about banking—except how to efficiently decrease my account balance. So I would have to somehow translate what is expected of me into code... I would have to build a bridge between banking and my technical expertise if I want to deliver any value. BDD helps me build such a bridge on a stable foundation of fluid communication between the delivery team and the domain experts.
Learn more
If you want to read more about BDD, I wrote a book on the subject. “Writing Great Specifications” explores the art of analysing requirements and will help you learn how to build a great BDD process and use examples as a core part of that process. The book talks about the ubiquitous language, collecting examples, and creating so-called executable specifications (automated tests) out of the examples—techniques that help BDD teams deliver great software on time and on budget.
If you are interested in buying “Writing Great Specifications,” you can save 39% with the promo code 39nicieja2 :)
I have experimented a little with the BDD approach, and my premature conclusion is that BDD is well suited to use case implementation but not to the underlying details. TDD still rocks at that level.
BDD is also used as a communication tool. The goal is to write executable specifications which can be understood by the domain experts.
From my latest reading on BDD compared with TDD: BDD focuses on specifying what will happen next, whereas TDD focuses on setting up a set of conditions and then looking at the output.
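One hedged way to picture that difference in Python (the Cart example is invented): the checks are identical, but the BDD version names and structures the test around the behaviour that should happen next.

class Cart:
    def __init__(self):
        self._total = 0

    def add(self, price: int, quantity: int):
        self._total += price * quantity

    def total(self) -> int:
        return self._total

# TDD wording: set up a set of conditions, look at the output.
def test_cart_total():
    cart = Cart()
    cart.add(price=500, quantity=2)
    assert cart.total() == 1000

# BDD wording: the name says what will happen next.
def test_adding_two_items_updates_the_running_total():
    cart = Cart()                     # Given an empty cart
    cart.add(price=500, quantity=2)   # When two 5.00 items are added
    assert cart.total() == 1000       # Then the total is 10.00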
Behaviour Driven Development seems to focus more on the interaction and communication between Developers and also between Developers and testers.
The Wikipedia Article has an explanation:
Behavior-driven development
Not practicing BDD myself though.
Consider the primary benefit of TDD to be design; it should really be called Test Driven Design. BDD is a subset of TDD; call it Behaviour Driven Design.
Now consider a popular implementation of TDD - Unit Testing. The Units in Unit Testing are typically one bit of logic that is the smallest unit of work you can make.
When you put those Units together in a functional way to describe the desired Behaviour to the machine, you need to understand that Behaviour yourself. Behaviour Driven Design focuses on verifying the implementers' understanding of the Use Cases/Requirements/Whatever, and it verifies the implementation of each feature. BDD and TDD in general serve the important purpose of informing design, and the second purpose of verifying the correctness of the implementation, especially when it changes. BDD done right involves biz and dev (and QA), whereas Unit Testing (possibly incorrectly viewed as TDD rather than one type of TDD) is typically done in the dev silo.
I would add that BDD tests serve as living requirements.
It seems to me that BDD is a broader scope. It almost implies TDD is used, that BDD is the encompassing methodology that gathers the information and requirements for using, among other things, TDD practices to ensure rapid feedback.
In short, there is a major difference between TDD and BDD:
In TDD we are mainly focused on test data.
In BDD our main focus is on the behaviour of the project, so that any non-programming person can understand what the code does from the title of the method.
There is no difference between TDD and BDD, except that you can read your tests better and use them as requirements. If you write your requirements with the same words as you write BDD tests, then you can come away from your client with some of your tests already defined and ready to drive the code.
Here's the quick snapshot:
TDD is just the process of testing code before writing it!
DDD is the process of being informed about the Domain before each cycle of touching code!
BDD is an implementation of TDD which brings in some aspects of DDD!