Is BDD a replacement for TDD? - unit-testing

I wonder whether BDD is a replacement for TDD. My current understanding is that in full-blown BDD we don't have unit tests any more; instead there are stories/scenarios/features and "test steps". That looks like a complete replacement for TDD to me. Is TDD dead?

Not at all. BDD is just a variant of TDD.
In TDD, you formulate your requirements as an executable test, then write the production code to fulfill the test. BDD does nothing but reformulate these requirements in a more human-readable form, which makes the tests somewhat more expressive to a human reader looking at the test report. (Btw: to achieve this, BDD requires considerably more code than traditional data-driven unit testing...)
That's all.
Thomas

I have a different viewpoint on this than other responders.
Dan North created BDD during his consulting work on TDD. When he saw that many people were confused by the "test" part (because they had testing experience), he decided to change the name. So at first, BDD was exactly what TDD is, explained correctly.
After that, Dan started to extend the idea of using executable specifications (unit tests) to drive the implementation by adding another level of specification. He was inspired by user stories, so the simplest BDD, as implemented by most tools, lets you write requirements as user-story scenarios; from those you write code which generates unit tests, and from those unit tests you work on the implementation. So, compared to TDD, there is another level of specification: user stories. Many tools include ready-made translations of user stories into tests, so many people forget about the unit tests, as you did, but they are still there and cannot be fully omitted - practically, and also theoretically, since programming directly against user stories is not efficient. But that is not the point: you use user stories to gather requirements from stakeholders, and you prove you implemented them by executing acceptance tests.
There are many other small things in BDD; you'd better read Dan's blog to understand them. But the main point is that BDD is an extension of TDD that reaches beyond the implementation phase, so neither can replace the other or render it useless.

Gabriel is almost right.
The fundamental difference at a unit level is that BDD uses the word "should" instead of "test". It turns out that when you say "test", most people start thinking about what their code does, and how they can test it. With BDD, we consider - and question - what our code should do. It's a subtle but important point, and if you want to know why that's powerful, go read up on Neuro-linguistic Programming - particularly around the way in which words affect thoughts and the model of the world. As a brief example, many people who are new to TDD start pinning their code down so that nobody can break it. BDDers tend to provide examples which demonstrate the value of their code so that people can change their code safely.
Dan realised while he was talking with Chris Matts and writing JBehave that he could take this up to a scenario level (scenarios aren't quite the same as stories). Because we were already using "should" at a unit level, it made sense to start writing things in English (I tend to use "should give me" rather than "should return", for instance). Acceptance Test Driven Development - ATDD - has been around for a long time, but this was AFAIK the first time anyone had written them in English with business stakeholders involved.
It's more than just a replacement for TDD. It's a different way of thinking about testing - very much focused on learning, deliberately discovering areas where we perhaps thought we knew what we were doing but didn't, uncovering and helping us to resolve ignorance and misunderstanding. It works at many levels. Chris Matts' Feature Injection takes this into the higher level space, right the way up to project visioning.
We still do write examples - or specifications if you like - at a unit level too, but really, it's a pattern which goes far higher than even scenarios. If you want to know more you might find my blog useful, Dan's is even better. Also, Chris has a comic book on Real Options which outlines some of the patterns I've mentioned.

BDD is not about replacing TDD. It is about giving more structure and discipline to your TDD practices. The hardest thing about TDD is that developers without the bigger picture hardly have a clue what to test and how much to test. BDD provides very concrete guidance in this gray area. Check out this post:
http://codingcraft.wordpress.com/2011/11/12/bdd-get-your-tdd-right/

As far as I understand, the advantages of BDD over TDD are:
Decoupling the tests from the implementation details: if you modify the implementation but not the behavior, the feature files won't break; at most the step files will.
Reusing existing testing code: you can achieve the same in TDD by defining custom assertions, fixtures, helpers, etc., but we (at least I) usually copy-paste testing code instead (a bad habit). It is much easier to reuse code with BDD. There will still be some repetition, but at least it will be in Gherkin.
Everything else works the same way as it normally does in TDD, so you can use any assertion library in the step definitions that you would use in unit tests. The only difference is that you have added another abstraction level by separating the what (feature descriptions in Gherkin) from the how (step definitions in a programming language) in your testing code.
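To make that what/how split concrete, here is a minimal sketch using the behave framework for Python (one Gherkin runner among many; the Site class and its methods are hypothetical):

# features/login.feature -- the "what", readable by non-programmers:
#
#   Feature: Login
#     Scenario: Wrong password shows the form again
#       Given a user "jdoe" with password "secret"
#       When "jdoe" logs in with password "letmein"
#       Then the login form is shown again

# features/steps/login_steps.py -- the "how", ordinary Python:
from behave import given, when, then
from myapp import Site  # hypothetical system under test

@given('a user "{name}" with password "{password}"')
def create_user(context, name, password):
    context.site = Site()
    context.site.register(name, password)

@when('"{name}" logs in with password "{password}"')
def attempt_login(context, name, password):
    context.response = context.site.login(name, password)

@then('the login form is shown again')
def see_login_form(context):
    # Any assertion library you would use in a unit test works here too.
    assert context.response.page == "login"

Changing the implementation behind Site leaves the feature file untouched; only these step definitions may need updating.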

You can use the term "Specification by Example" for BDD, which emphasizes an important aspect of this methodology: specifying collaboratively - through whole-team specification workshops, smaller meetings, or teleconference reviews. In these sessions with different stakeholders, concrete examples are used to illustrate requirements. Discussing requirements in the form of examples helps to create a shared understanding of the problem domain and possible solutions.
As a happy side effect, specifications with examples are well suited for test automation, so you usually improve test coverage as a result. The methodology also helps to involve non-technical stakeholders: the tools that help you create business-readable input are by nature not tied to programming languages, but are often based on simple document formats that many people can easily understand.

BDD should emphasize behavior from a user perspective and is ideally suited to driving end-to-end tests - a kind of poor man's DSL for acceptance-test-driven development. It can complement TDD, but it definitely is not a substitute. TDD is as much a design activity as a testing activity (poorly designed code is difficult to test, so unit tests encourage good design). BDD has nothing to do with design; it is a kind of testing that abstracts away from the code altogether.
In practice, BDD results in a lot more boilerplate code under the hood than normal acceptance tests, so I prefer creating an internal DSL in a normal programming language to drive my acceptance tests. As for unit tests: BDD emphasizes behavior from a user perspective and therefore should not be used at the unit level.
BDD is an attempt to bridge the communication gap between business stakeholders and programmers. In some areas it can be useful, such as banking applications, where attention to detail on things like interest-rate calculations is important and requires direct input from domain experts. IMHO, BDD is not the panacea that some of its acolytes claim it is, and it should only be used if there is a compelling reason to do so.
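To illustrate that preference, here is a minimal sketch of an internal DSL in a normal programming language (Python here) for a banking-style acceptance test; the app driver and its create_account/apply_interest/balance methods are hypothetical:

class AcceptanceDsl:
    def __init__(self, app):
        self.app = app  # thin driver around the real application

    def given_account_with_balance(self, balance):
        self.account = self.app.create_account(balance)
        return self

    def when_interest_is_applied(self, rate):
        self.app.apply_interest(self.account, rate)
        return self

    def then_balance_should_be(self, expected):
        assert self.app.balance(self.account) == expected
        return self

def test_interest_calculation(app):
    (AcceptanceDsl(app)
        .given_account_with_balance(1000)
        .when_interest_is_applied(rate=0.05)
        .then_balance_should_be(1050))

The scenario still reads almost like Gherkin, but there is no text-parsing layer to maintain: the "steps" are ordinary methods that an IDE can navigate and refactor.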

Related

What do I, as a programmer, need to know about Behavior Driven Development?

To put this in context. I like TDD. I like writing my tests first, and expressing what I need my code to do using assertEquals and assertTrue etc.
But everyone seems to be getting with the BDD programme. I see a lot of talk about rSpec and Cucumber and Lettuce. When I look at these, they look overly verbose, almost like COBOL in their naive assumption that somehow writing long "pseudo-English" makes formal specifications legible to the layman.
Some of the writings about BDD make it sound like it's for people who found TDD too hard to do in practice. I don't feel I have this problem. Or, at least, where I have it's been due to problems with doing TDD against databases or in interactive environments, not because I couldn't formulate or prioritise my tests.
So my question is this. What value is BDD for me as a programmer? a) In the context of projects I'm writing for myself (or with other programmers). b) In the context of working with non-technical customers.
For people who've used BDD for a number of projects, what did it buy you over and above TDD?
Have you found customers, product or project managers who can write sufficiently rigid test cases in BDD but couldn't write them as ordinary tests?
I've tried BDD on a very simple internal project, then used it on a complex one.
I found that the main difference is in the kinds of tests you run in BDD.
The BDD outer tests are acceptance tests, which do not deal with the classes or any internal code, but rely on testing the system as a whole.
The BDD inner tests are exactly the same unit tests you write in TDD.
In this way you can run the same red-green-refactor approach on two levels, as sketched below.
I found the external tests extremely helpful on the complex project.
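A sketch of what those two levels can look like in Python (all names here are hypothetical):

# Outer loop: an acceptance test against the system as a whole.
# It may stay red for days while the feature is being built.
def test_user_can_register_and_log_in():
    app = start_application()            # hypothetical end-to-end entry point
    app.register("jdoe", "secret")
    assert app.login("jdoe", "secret").succeeded

# Inner loop: plain TDD unit tests for the pieces that will make the
# outer test pass; these go red-green in minutes.
def test_password_hash_round_trip():
    from myapp.auth import hash_password, verify   # hypothetical module
    digest = hash_password("secret")
    assert verify("secret", digest)
    assert not verify("wrong", digest)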
To answer question (a), if you don't have any non-technical stakeholders, then maybe Cucumber isn't appropriate, but are you confident you have sufficient integration testing in place? Unit tests usually aren't enough.
I like this video's discussion of the difference between TDD and BDD: http://channel9.msdn.com/Series/mvcConf/mvcConf-2-Brandom-Satrom-BDD-in-ASPNET-MVC-using-SpecFlow-WatiN-and-WatiN-Test-Helpers (it shows .NET tools, and not necessarily the .NET tools I use, but the concepts are right)
Generally, you might say it improves your feedback loop, changing it from checking whether the software is implemented as you expect (TDD) to checking whether the software meets requirements from your users' perspective (BDD).

Test Driven Development (TDD) for User Interface (UI) with functional tests

As we know, TDD means "write the test first, and then write the code". When it comes to unit testing this is fine, because you are limited to the "unit".
However, when it comes to UI, writing functional tests beforehand makes less sense (to me), because the functional tests have to verify a (possibly long) set of functional requirements, which may span multiple screens (pages) and preconditions like "logged in" or "having recently inserted a record".
According to Wikipedia:
Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations.
(Of course, Wikipedia is no "authority", but this sounds very logical.)
So, any thoughts - or better, experience - with writing functional tests first for a UI, and then the code? Does it work? And is it "pain"?
Try BDD, Behavior Driven Development. It promotes writing specification stories which are then executed step by step, stimulating the app to change its state and verifying the outcomes.
I use BDD scenarios to write UI code. Business requests are described as BDD stories, and then the functionality is written to turn the stories, i.e. the tests, green.
The key to testing a UI is to separate your concerns - the behavior of your UI is actually different than the appearance of your UI. We struggle with this mentally, so as an exercise take a game like Tetris and imagine porting it from one platform (say PC) to another (the web). The intuition is that everything is different - you have to rewrite everything! But in reality all this is the same:
The rules of the game.
The speed of the blocks falling.
The logic for matching rows.
Which block is chosen.
And more...
You get the idea. The only thing that changes is how the screen is drawn, so separate how your UI looks from how it works. This is tricky, and usually can't be perfect, but it can come close. My recommendation is to write the UI last. Tests will provide feedback on whether your behavior is working, and the screen will tell you whether it looks right. This combination provides the fast feedback we're looking for from TDD, without a UI; a small sketch follows.
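Here is that separation in miniature in Python (class name and rules simplified, purely illustrative): the "full rows are cleared" rule is plain logic, unit-tested with no screen at all.

class TetrisBoard:
    def __init__(self, width=10, height=20):
        self.width, self.height = width, height
        self.rows = [[None] * width for _ in range(height)]

    def clear_full_rows(self):
        """Drop full rows, add empty rows on top; return how many were cleared."""
        kept = [row for row in self.rows if any(cell is None for cell in row)]
        cleared = self.height - len(kept)
        self.rows = [[None] * self.width for _ in range(cleared)] + kept
        return cleared

def test_full_bottom_row_is_cleared():
    board = TetrisBoard(width=4, height=3)
    board.rows[2] = ["I", "I", "I", "I"]     # fill the bottom row
    assert board.clear_full_rows() == 1
    assert board.rows[2] == [None] * 4       # replaced by an empty row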
TDD for functional tests makes sense to me: write a functional test and see it fail, then divide the problem into parts, write a unit test for each part, write code for each part, watch the unit tests pass, and then your functional test should pass.
This is the workflow suggested in the book Test-Driven Development with Python by Harry Percival (available online for free): a failing functional test on the outside drives failing unit tests, and then code, on the inside.
P.S. You can automate your functional tests using e.g. Selenium, and you can add them to the continuous integration cycle along with the unit tests.
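For instance, a minimal Selenium functional test in Python might look like this (the URL and the element ID are placeholders for your own app):

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_home_page_lets_user_add_an_item():
    driver = webdriver.Firefox()          # any supported browser driver works
    try:
        driver.get("http://localhost:8000")                   # your dev server
        assert "To-Do" in driver.title                        # whole-app check
        inputbox = driver.find_element(By.ID, "id_new_item")  # placeholder ID
        inputbox.send_keys("Buy milk")
    finally:
        driver.quit()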
ATDD is very useful if you have a very good grasp of how the UI should behave, and if you can keep the cost of change low for the functional tests you build up front.
The biggest problem with doing this, of course, is that most often the UI is not fully specified.
For example, if you are building a product and are still doing rapid iterations - putting the UI through usability testing and incorporating the feedback - you do not want the baggage of fixing your functional tests with every small change to the UI.
Given that functional tests are generally slow, the feedback cycle is long, and it is very painful to keep them green as the UI changes.
I work on a product team where we had this exact same decision to make. A colleague of mine has summarized our final approach very well here. (Disclaimer: Please ignore the tool specific details there.)
I've done acceptance TDD with a UI. We would assert that the common header and footer were used by XPathing for the appropriate IDs. We also used XPath to assert that the data appeared in the correct tag relative to whatever IDs we used for basic layout and structure, and we would assert that the output page was valid HTML 4.01 Strict.
Programmatic UI tests are a salvation when you want to be confident your UI does what is expected. UI tests are closer to BDD (behavior-driven development) than to TDD, but the terminology is cloudy, and whatever you call them, they are useful!
I have had rather good experience using Cucumber. I use it to test Flex applications, but it's mostly used to test web apps. Click the link! The site has really nice examples of the methodology.
Advanced UI is not compatible with straight TDD.
Use XP practices to guide you.
It is unwise to slavishly follow TDD (or even BDD) for UI; it's worth thinking about the root goals of both practices. In particular, we want to avoid using general-purpose TDD tools for UI. They're simply not designed for that use case, and you end up binding yourself to tests that you have to iterate on as rapidly as the UI code itself (I call this UI test-lock).
That's not the purpose of TDD. We don't do it to go slower; we use it to go fast (and NOT break things!).
We want to achieve a few things with TDD:
Code / Algorithmic Correctness
Minimal code to complete a test
Fast feedback
Architectural separation of concerns into units (modular parts under test, no more, no less)
Formalize specification in a useful/usable way
To some extent, document what the module/system does (e.g. provide an example of use.)
We can achieve these goals in UI development, but how and where we apply practices is important to avoid test-lock.
Applying these principles to UI
What benefits can we get from testing UI, and how can we avoid test-lock?
With usual TDD one major benefit is that we can think about the abstractions, and generally design the shape of a test subject, as we think of names, relationships etc.
For UI, much of what we want to do, especially if it's non-trivial, is only reasoned about effectively when we can see it manifested on screen.
Writing tests that describe what we expect to see in a UI - specifically what data - can be TDD-tested. For example, if we have a view that should show an account balance, a good test expects the balance to appear in an on-screen element that can be addressed by some form of ID. Then we know our app/view is displaying the expected content in standard UI library elements.
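A minimal, self-contained sketch of such a test in Python (the render function here is a stand-in for a real template or component layer):

def render_account_view(balance):
    # Minimal stand-in for a real template/view layer.
    return f'<span id="account-balance">{balance}</span>'

def test_balance_is_shown_in_the_balance_element():
    html = render_account_view(balance="1,234.56")
    assert 'id="account-balance"' in html   # addressable by ID
    assert "1,234.56" in html               # the expected data is present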
If, on the other hand, we want to create a more complex UI and wish to apply TDD to it, we will have some problems. Chances are, we won't even know how to code it, if it's novel enough.
For this we would prototype and iterate with tools that give us fast feedback but don't saddle us with the need to write tests first. When we reach the point where the implementation becomes clear, we can switch back to test-first for the production code.
This keeps us fast. Designers and Product owners in the team can also see results, provide inputs, and tweaks.
The code should be well formed. As soon as we identify small pieces that will make it into production code, we can migrate them to be under test, using TCR or a similar method to add tests, or simply using the TDD method to rewrite them. Remember: once it works, you can apply snapshot/record-playback testing to keep regression errors at bay.
Do what works, keep it simple.
Addendum
Notes about the scope of UI discussed above
There will be a need to figure out how to make things work as expected in any advanced UI. There is also a likelihood that specs will change and be tweaked in extremely arbitrary/capricious ways. We need to ensure that any tests we apply to these test subjects are flexible, or extremely easy to regenerate from a working model (replay tests are gold for this).
Good development practices - in a nutshell, code separation - will help you ensure the code is at least testable, and simpler to debug, maintain and reason about.
Don't be a slave to ritual.
It doesn't make you correct. Just slower.

When to unit-test vs manual test

While unit-testing seems effective for larger projects where the APIs need to be industrial strength (for example development of the .Net framework APIs, etc.), it seems possibly like overkill on smaller projects.
When is the automated TDD approach the best way, and when might it be better to just use manual testing techniques - log the bugs, triage, fix them, and so on?
Another issue: when I was a tester at Microsoft, it was emphasized to us that there was value in having the developers and testers be different people, and that the tension between these two groups could help create a great product in the end. Can TDD break this idea and create a situation where a developer might not be the right person to rigorously find their own mistakes? It may be automated, but there are many ways to write the tests, and it is questionable whether a given set of tests will "prove" that quality is acceptable.
The effectiveness of TDD is independent of project size. I will practice the three laws of TDD even on the smallest programming exercise. The tests don't take much time to write, and they save an enormous amount of debugging time. They also allow me to refactor the code without fear of breaking anything.
TDD is a discipline similar to the double-entry bookkeeping practiced by accountants. It prevents errors in the small. Accountants enter every transaction twice: once as a credit, and once as a debit. If no simple errors were made, the balance sheet will sum to zero. That zero is a simple spot check that prevents the executives from going to jail.
By the same token, programmers write unit tests in advance of their code as a simple spot check. In effect, they write each bit of code twice: once as a test, and once as production code. If the tests pass, the two bits of code are in agreement. Neither practice protects against larger and more complex errors, but both are nonetheless valuable.
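In miniature, the two "entries" might look like this (a generic illustration, not taken from the answer above):

# Entry one: the test, written first and watched failing.
def test_leap_years():
    assert is_leap_year(2000)       # divisible by 400
    assert not is_leap_year(1900)   # divisible by 100 but not 400
    assert is_leap_year(2024)       # divisible by 4
    assert not is_leap_year(2023)

# Entry two: the production code; when the test passes, the books balance.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)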
The practice of TDD is not really a testing technique, it is a development practice. The word "test" in TDD is more or less a coincidence. As such, TDD is not a replacement for good testing practices, and good QA testers. Indeed, it is a very good idea to have experienced testers write QA test plans independently (and often in advance of) the programmers writing the code (and their unit tests).
It is my preference (indeed my passion) that these independent QA tests are also automated using a tool like FitNesse, Selenium, or Watir. The tests should be easy to read by business people, easy to execute, and utterly unambiguous. You should be able to run them at a moment's notice, usually many times per day.
Every system also needs to be tested manually. However, manual testing should never be rote. A test that can be scripted should be automated. You only want to put humans in the loop when human judgement is needed. Therefore humans should be doing exploratory testing, not blindly following test plans.
So, the short answer to the question of when to unit-test versus manual test is that there is no "versus". You should write automated unit tests first for the vast majority of the code you write. You should have automated QA acceptance tests written by testers. And you should also practice strategic exploratory manual testing.
Unit tests aren't meant to replace functional/component tests. Unit tests are really focused, so they won't hit the database, external services, etc. Integration tests do that, but you can keep them really focused too. The bottom line, on the specific question, is that unit tests don't replace those manual tests.
Now, automated functional tests + automated component tests can certainly replace manual tests. It will depend a lot of the project and the approach to it on who will actually do those.
Update 1: Note that if developers are creating the automated functional tests, you still want to review that these have the appropriate coverage, complementing them as needed. Some developers create automated functional tests with their "unit" test framework, because they still have to do smoke tests regardless of the unit tests, and it really helps to have those automated :)
Update 2: Unit testing isn't overkill for a small project, nor is automating the smoke tests or using TDD. What is overkill is having the team do any of that for the first time on the small project. Each of those has an associated learning curve (especially unit testing and TDD), and it won't always be done right at first. You also want someone involved who has been doing it for a while, to help avoid pitfalls and get past some coding challenges that aren't obvious when starting out. The issue is that it isn't common for teams to have these skills.
TDD is the best approach whenever it is feasible. TDD testing is automatic, quantifiable through code coverage, and a reliable method of ensuring code quality.
Manual testing requires a huge amount of time (compared to TDD) and suffers from human error.
There is nothing that says TDD means only developers test. Developers should be responsible for coding a percentage of the test framework; QA should be responsible for a much larger portion. Developers test APIs the way they want to test them. QA tests APIs in ways that I really wouldn't ever have thought to, doing things that, while seemingly crazy, are actually done by customers.
I would say that unit tests are a programmer's aid to answer the question: "Does this code do what I think it does?"
This is a question they need to ask themselves a lot, and programmers like to automate anything they do a lot where they can.
The separate test team needs to answer a different question: "Does this system do what I (and the end users) expect it to do? Or does it surprise me?"
There is a whole massive class of bugs - related to the programmer or designer having a different idea of what is correct - that unit tests will never pick up.
According to studies of various projects (1), unit tests find 15-50% of defects (30% on average). This doesn't make them the worst bug finder in your arsenal, but it's no silver bullet either. There are no silver bullets; any good QA strategy consists of multiple techniques.
A test that is automated runs more often, so it will find defects earlier and reduce their total cost immensely - that is the true value of test automation.
Invest your resources wisely and pick the low-hanging fruit first.
I find that automated tests are easiest to write and maintain for small units of code - isolated functions and classes. End-user functionality is easier to test manually - and a good tester will find many oddities beyond the required tests. Don't set them up against each other; you need both.
Dev vs. testers: developers are notoriously bad at testing their own code. The reasons are psychological, technical and, last but not least, economical - testers are usually cheaper than developers. But developers can do their part and make testing easier. TDD makes testing an intrinsic part of program construction rather than an afterthought; that is the true value of TDD.
Another interesting point about testing: there's no point in 100% coverage. Statistically, bugs follow an 80:20 rule - the majority of bugs are found in small sections of the code. Some studies suggest the effect is even sharper - and that tests should focus on the places where bugs turn up.
(1) Programming Productivity, Jones 1986, among others; quoted from Code Complete, 2nd ed. But as others have said, unit tests are only one kind of test; integration, regression and system tests can be - at least partially - automated as well.
My interpretation of the results: "many eyes" has the best defect detection, but only if you have some formal process that makes them actually look.
Every application gets tested.
Some applications get tested only in the sense of "does my code compile and does it appear to function".
Some applications get tested with unit tests. Some developers are religious about unit tests, TDD and code coverage, to a fault; like everything, too much is more often than not bad.
Some applications are lucky enough to get tested by a QA team. Some QA teams automate their testing; others write test cases and test manually.
Michael Feathers, author of Working Effectively with Legacy Code, wrote that code not wrapped in tests is legacy code. Until you have experienced the Big Ball of Mud, I don't think any developer truly understands the benefit of good application architecture and a suite of well-written unit tests.
Having different people test is a great idea. The more people that can look at an application the more likely all the scenarios will get covered, including the ones you didn't intend to happen.
TDD has gotten a bad rap lately. When I think of TDD, I think of dogmatic developers meticulously writing tests before they write the implementation. While this is true, what gets overlooked is that by writing the tests (first, or shortly after), the developer experiences the method/class in the shoes of its consumer. Design flaws and shortcomings become immediately apparent.
I would argue that the size of the project is irrelevant. What is important is the lifespan of the project: the longer a project lives, the greater the likelihood that a developer other than the one who wrote it will work on it. Unit tests are documentation of the expectations of the application - a manual of sorts.
Unit tests can only go so far (as can all other types of testing). I look on testing as a kind of "sieve" process. Each different type of testing is like a sieve that you are placing under the outlet of your development process. The stuff that comes out is (hopefully) mostly features for your software product, but it also contains bugs. The bugs come in lots of different shapes and sizes.
Some of the bugs are pretty easy to find because they are big or get caught in basically any kind of sieve. On the other hand, some bugs are smooth and shiny, or don't have a lot of hooks on the sides so they would slip through one type of sieve pretty easily. A different type of sieve might have different shape or size holes so it will be able to catch different types of bugs. The more sieves you have, the more bugs you will catch.
Obviously the more sieves you have in the way, the slower it is for the features to get through as well, so you'll want to try to find a happy medium where you aren't spending too much time testing that you never get to release any software.
The nicest point (IMO) of automated unit tests is that when you change (improve, refactor) the existing code, it's easy to check that you didn't break it. Testing everything manually again and again would be tedious.
Your question seems to be more about automated testing vs manual testing. Unit testing is a form of automated testing but a very specific form.
Your remark about having separate testers and developers is right on the mark though. But that doesn't mean developers shouldn't do some form of verification.
Unit testing is a way for developers to get fast feedback on what they're doing. They write tests to quickly run small units of code and verify their correctness. It's not really testing in the sense you seem to use the word, just as a syntax check by a compiler isn't testing. Unit testing is a development technique. Code that's been written using this technique is probably of higher quality than code written without it, but it still has to go through quality control.
The question of automated testing vs manual testing for the test department is easier to answer: whenever the project gets big enough to justify the investment of writing automated tests, you should use automated tests; when you've got lots of small one-time tests, you should do them manually.
Having been on both sides, QA and development, I would assert that someone should always manually test your code. Even if you are using TDD, there are plenty of things that you as a developer may not be able to cover with unit tests, or may not think about testing. This especially includes usability and aesthetics. Aesthetics includes proper spelling, grammar, and formatting of output.
Real life example 1:
A developer was creating a report we display on our intranet for managers. There were many formulas, all of which the developer tested before the code came to QA. We verified that the formulas were, indeed, producing the correct output. What we asked development to correct, almost immediately, was the fact that the numbers were displayed in pink on a purple background.
Real life example 2:
I write code in my spare time, using TDD. I like to think I test it thoroughly. One day my wife walked by when I had a message dialog up, read it, and promptly asked, "What on Earth is that message supposed to mean?" I thought the message was rather clear, but when I reread it I realized it was talking about parent and child nodes in a tree control, and probably wouldn't make sense to the average user. I reworded the message. In this case, it was a usability issue, which was not caught by my own testing.
"unit-testing seems effective for larger projects where the APIs need to be industrial strength, it seems possibly like overkill on smaller projects."
It's true that unit tests of a moving API are brittle, but unit testing is also effective on API-less projects such as applications. Unit testing is meant to test the units a project is made of: it ensures that every unit works as expected. This is a real safety net when modifying - refactoring - the code.
As far as the size of the project is concerned, it's true that writing unit tests for a small project can be overkill - and here I would define a small project as a small program that can be tested manually, very easily and quickly, in no more than a few seconds. But a small project can grow, in which case it may prove advantageous to have unit tests at hand.
"there was a value in having the developers and testers be different people, and that the tension between these two groups could help create a great product in the end."
Whatever the development process, unit testing is not meant to supersede any other stage of testing, but to complement them with tests at the development level, so that developers can get very early feedback without having to wait for an official build and official testing. With unit testing, the development team delivers code that works downstream: not bug-free code, but code that can be tested by the test team(s).
To sum up, I test manually when it's really very easy, or when writing unit tests is too complex, and I don't aim for 100% coverage.
I believe it is possible to combine the expertise of QA/testing staff (defining the tests/acceptance criteria) with the TDD concept of using a developer-owned API (as opposed to a GUI or HTTP/messaging interface) to drive the application under test.
It is still critical to have independent QA staff, but with modern test tools like FitNesse, Selenium and Twist we no longer need huge manual test teams.
Just to clarify something many people seem to miss:
TDD, in the sense of "write a failing test, write code to make the test pass, refactor, repeat", is usually most efficient and useful when you write unit tests.
You write a unit test around just the class/function/unit of code you are working on, using mocks or stubs to abstract out the rest of the system.
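For example, a unit test in Python that isolates one class by stubbing its collaborator with unittest.mock (the WeatherReport class and its gateway are invented for this sketch):

from unittest.mock import Mock

class WeatherReport:
    """Unit under test; its gateway collaborator is injected."""
    def __init__(self, gateway):
        self.gateway = gateway

    def summary(self, city):
        return f"{city}: {self.gateway.current_temp(city)} degrees"

def test_summary_formats_the_gateway_temperature():
    gateway = Mock()                          # stands in for the real service
    gateway.current_temp.return_value = 21
    assert WeatherReport(gateway).summary("Oslo") == "Oslo: 21 degrees"
    gateway.current_temp.assert_called_once_with("Oslo")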
"Automated" testing usually refers to higher level integration/acceptance/functional tests - you can do TDD around this level of testing, and it's often the only option for heavily ui-driven code, but you should be aware that this sort of testing is more fragile, harder to write test-first, and no substitute for unit testing.
TDD gives me, as the developer, confidence that the change I am making to the code has the intended consequences and ONLY the intended consequences; thus the metaphor of TDD as a "safety net" is useful. Change any code in a system without it, and you can have no idea what else you may have broken.
Engineering tension between developers and testers is really bad news: developers cultivate a "well, the testers are paid to find the bugs" mindset (leading to laziness), while the testers - feeling they aren't seen to be doing their jobs unless they find faults - throw up as many trivial problems as they can. This is a gross waste of everyone's time.
The best software development, in my humble experience, is where the tester is also a developer and the unit tests and code are written together as part of a pair programming exercise. This immediately puts the two people on the same side of the problem, working together towards the same goal, rather than putting them in opposition to each other.
Unit testing is not the same as functional testing. As far as automation is concerned, it should normally be considered when the testing cycle will be repeated more than two or three times, and it is preferred for regression testing. If the project is small, or will not have frequent changes or updates, then manual testing is the better and more cost-effective option; in such cases automation will prove more costly, given the script writing and maintenance.

Do you think Unit Tests are a good way to show your fellow programmers how to use an API?

Do you think Unit Tests are a good way to show your fellow programmers how to use an API?
I was listening to the Stack Overflow podcast this week, and I now realize that unit testing is not appropriate in all situations (i.e. it can cost you time if you go for 100% code coverage). I agree with this, as I have suffered from "OCD code-coverage disorder" in the past, and have now mended my ways.
However, to further my knowledge of the subject, I'd like to know whether unit testing is a good way to bring in new programmers who are unfamiliar with the project's APIs. (It sure seems easier than just writing documentation... although I like it when there's documentation later...)
I think unit testing is a fantastic way to document APIs. It doesn't necessarily replace good commenting or API docs, but it is a very practical way to help people get into the nitty-gritty of your code. Moreover, good unit testing promotes good design, so chances are your code will be easier to understand as a result.
Unit testing, IMO, isn't a substitute for documentation in any way. You write tests to find and exercise the corner cases of your code, to make sure that all boundary conditions are met. These are usually not the most appropriate examples to give to someone who is trying to learn how the method works.
In a unit test, you also typically don't give much explanation of why things happen the way they do.
Finally, the contents of unit tests typically aren't as accessible to documentation-generation tools, and are usually separated from the code they're testing (oddities like Python's doctests notwithstanding).
Good documentation usually includes good examples. It's hard for me to imagine a better set of examples than exactly the ones that show what's expected of a correct implementation!
In addition, maintenance is a crucial issue. It's good practice to deal with a defect by adding a test that exposes the defect, and then by making that test succeed without failing prior tests. If unit tests are regarded as part of the documentation, then this new test will be part of the documentation's change log, thus helping subsequent developers learn from the experience.
No. Tests can be used as a secondary reference, they at least have the benefit of being examples that actually compile, but they are not a substitute for good API documentation.
Beware of zealots claiming almost mystical powers for unit tests saying things like "the tests are the design", "the tests are the specification", "the tests are the documentation". No. The tests are the tests and they don't give you an excuse to cut corners elsewhere.
Documentation should not be substituted with unit test code, period.
Documentation is geared towards the programmer that is trying to use, and understand, your API.
Unit tests are geared towards covering corner cases, that although they should be documented, should not be the focus for a new user of your API.
Documentation should have a gradual build-up of complexity, unit tests should be as simple as possible yet as complex as necessary in order to test the functionality.
For instance, you might have many unit tests that look very alike, but have minute differences, just to cover various oddball corner cases and assert the correct behavior.
Having the user of your API decipher these unit tests, figure out the differences, and work out why they should produce different (or possibly the same) behavior is not a good teaching aid.
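For example, a maintainer's corner-case table often looks like this sketch with pytest (parse_amount is a hypothetical function under test) - valuable for regression, but a poor tutorial:

import pytest

@pytest.mark.parametrize("text, expected", [
    ("1",   1.0),      # bare integer
    ("1.",  1.0),      # trailing dot
    (".5",  0.5),      # leading dot
    ("-0",  0.0),      # negative zero
    ("1e3", 1000.0),   # exponent form
])
def test_parse_amount_corner_cases(text, expected):
    assert parse_amount(text) == expected   # hypothetical function under test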
Unit tests are for the maintainers of your API: the programmers who will fix bugs in it, add new features to it, refactor it, and so on.
Documentation is for programmers that will use your api.
These are not the same target audiences.
One such subtle difference might be that while a unit test asserts that a function passed a negative value will do something specific, the documentation would instead go into detail about why that particular solution was chosen. If all the documentation does is rewrite the code into English, there is not much point to it, so documentation is usually a lot more verbose and explanatory than the code.
Although not a replacement, Unit Tests can be an excellent way of demonstrating the use of a class or API, particularly for library functions where the amount of code to perform tests is minimal.
For example, our math library is well unit-tested. If someone has a question (e.g. how to find information about ray/volume intersections), the first place I send them is the unit tests.
(Why do we test something that could be considered so stable? For one thing, we support five platforms and gradually add platform-specific SIMD implementations based on benchmarking; for another, the unit tests provide an excellent framework for adding new platforms or functionality.)
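As a sketch of what such a test-as-example can look like (the function name and signature here are hypothetical, not any real library's API):

import math

def test_ray_hits_unit_box_straight_on():
    hit, distance = ray_aabb_intersect(        # hypothetical library function
        origin=(0.0, 0.0, -5.0),               # ray starts in front of the box
        direction=(0.0, 0.0, 1.0),             # pointing at it along +z
        box_min=(-1.0, -1.0, -1.0),
        box_max=(1.0, 1.0, 1.0),
    )
    assert hit
    assert math.isclose(distance, 4.0)         # from -5 to the near face at z = -1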
FWIW, I thought this week's podcast discussion of unit testing was bang on. Unit testing (and test-driven development) is a great method for developing the components of your application, but for testing the application itself you quickly reach a point where the code needed for testing becomes disproportionate and brittle.

What are the primary differences between TDD and BDD? [closed]

Test Driven Development has been the rage in the .NET community for the last few years. Recently, I have heard grumblings in the ALT.NET community about BDD. What is it? What makes it different from TDD?
I understand BDD to be more about specification than testing. It is linked to Domain Driven Design (don't you love these *DD acronyms?).
It is linked with a certain way to write user stories, including high-level tests. An example by Tom ten Thij:
Story: User logging in
As a user
I want to login with my details
So that I can get access to the site
Scenario: User uses wrong password
Given a username 'jdoe'
And a password 'letmein'
When the user logs in with username and password
Then the login form should be shown again
(In his article, Tom goes on to directly execute this test specification in Ruby.)
The pope of BDD is Dan North. You'll find a great introduction in his Introducing BDD article.
You will find a comparison of BDD and TDD in this video, as well as an opinion of BDD as "TDD done right" by Jeremy D. Miller.
March 25, 2013 update
The video above has been missing for a while. Here is a recent one by Llewellyn Falco, BDD vs TDD (explained). I find his explanation clear and to the point.
To me, the primary difference between BDD and TDD is focus and wording - and words are important for communicating your intent.
TDD directs the focus to testing, and since in the "old waterfall world" tests come after implementation, this mindset leads to the wrong understanding and behaviour.
BDD directs the focus to behaviour and specification, so waterfall minds are not led astray. BDD is thus more easily understood as a design practice rather than a testing practice.
There seem to be two types of BDD.
The first is the original style that Dan North discusses and which caused the creation of the xBehave style frameworks. To me this style is primarily applicable for acceptance testing or specifications against domain objects.
The second style is the one Dave Astels popularised, which, to me, is a new form of TDD with some serious benefits. It focuses on behavior rather than testing, and on small test classes, trying to get to the point where you basically have one line per specification (test) method. This style suits all levels of testing and can be done with any existing unit testing framework, though newer (xSpec-style) frameworks help you focus on the behavior rather than the testing.
There is also a BDD group which you might find useful:
http://groups.google.com/group/behaviordrivendevelopment/
Test-Driven Development is a test-first software development methodology, which means that it requires writing test code before writing the actual code that will be tested. In Kent Beck’s words:
The style here is to write a few lines of code, then a test that should run, or even better, to write a test that won't run, then write the code that will make it run.
After figuring out how to write one small piece of code, now, instead of just coding on, we want to get immediate feedback and practice "code a little, test a little, code a little, test a little." So we immediately write a test for it.
So TDD is a low-level, technical methodology that programmers use to produce clean code that works.
Behaviour-Driven Development is a methodology that grew out of TDD, but evolved into a process that concerns not only programmers and testers, but the entire team and all important stakeholders, technical and non-technical. BDD started out of a few simple questions that TDD doesn't answer well: how many tests should I write? What should I actually test - and what shouldn't I? Which of the tests I write will in fact be important to the business or to the overall quality of the product, and which are just my over-engineering?
As you can see, such questions require collaboration between technology and business. Business stakeholders and domain experts often can tell engineers what kind of tests sound like they would be useful—but only if the tests are high-level tests that deal with important business aspects. BDD calls such business-like tests “examples,” as in “tell me an example of how this feature should behave correctly,” and reserves the word “test” for low-level, technical checks such as data validation or testing API integrations. The important part is that while tests can only be created by programmers and testers, examples can be collected and analysed by the entire delivery team—by designers, analysts, and so on.
In a sentence, one of the best definitions of BDD I have found so far is that BDD is about “having conversations with domain experts and using examples to gain a shared understanding of the desired behaviour and discover unknowns.” The discovery part is very important. As the delivery team collects more examples, they start to understand the business domain more and more and thus they reduce their uncertainty about some aspects of the product they have to deal with. As uncertainty decreases, creativity and autonomy of the delivery team increase. For instance, they can now start suggesting their own examples that the business users didn’t think were possible because of their lack of tech expertise.
Now, having conversations with the business and domain experts sounds great, but we all know how that often ends up in practice. I started my journey with tech as a programmer. As programmers, we are taught to write code—algorithms, design patterns, abstractions. Or, if you are a designer, you are taught to design—organize information and create beautiful interfaces. But when we get our entry-level jobs, our employers expect us to "deliver value to the clients." And among those clients can be, for example... a bank. But I could know next to nothing about banking—except how to efficiently decrease my account balance. So I would have to somehow translate what is expected of me into code... I would have to build a bridge between banking and my technical expertise if I want to deliver any value. BDD helps me build such a bridge on a stable foundation of fluid communication between the delivery team and the domain experts.
Learn more
If you want to read more about BDD, I wrote a book on the subject. “Writing Great Specifications” explores the art of analysing requirements and will help you learn how to build a great BDD process and use examples as a core part of that process. The book talks about the ubiquitous language, collecting examples, and creating so-called executable specifications (automated tests) out of the examples—techniques that help BDD teams deliver great software on time and on budget.
If you are interested in buying “Writing Great Specifications,” you can save 39% with the promo code 39nicieja2 :)
I have experimented a little with the BDD approach, and my preliminary conclusion is that BDD is well suited to use-case implementation, but not to the underlying details. TDD still rocks at that level.
BDD is also used as a communication tool. The goal is to write executable specifications which can be understood by the domain experts.
From my latest reading on BDD as compared to TDD: BDD focuses on specifying what will happen next, whereas TDD focuses on setting up a set of conditions and then looking at the output.
Behaviour-driven development also seems to focus more on the interaction and communication between developers, and between developers and testers.
The Wikipedia Article has an explanation:
Behavior-driven development
Not practicing BDD myself though.
Consider that the primary benefit of TDD is design; it should really be called Test-Driven Design. BDD is a subset of TDD - call it Behaviour-Driven Design.
Now consider a popular implementation of TDD: unit testing. The units in unit testing are typically one bit of logic, the smallest unit of work you can make.
When you put those units together in a functional way to describe the desired behaviour to the machine, you need to understand that behaviour yourself. Behaviour-driven design focuses on verifying the implementers' understanding of the use cases/requirements/whatever, and verifies the implementation of each feature. BDD, and TDD in general, serves the important purpose of informing design, and the secondary purpose of verifying the correctness of the implementation, especially when it changes. BDD done right involves biz and dev (and QA), whereas unit testing (possibly incorrectly viewed as TDD itself, rather than as one type of TDD) is typically done in the dev silo.
I would add that BDD tests serve as living requirements.
It seems to me that BDD has a broader scope: it almost implies that TDD is used, with BDD as the encompassing methodology that gathers the information and requirements for applying, among other things, TDD practices to ensure rapid feedback.
In short, the major difference between TDD and BDD is this:
In TDD, the main focus is on test data.
In BDD, the main focus is on the behavior of the project, so that any non-programming person can understand what the code does from the title of the method.
There is no difference between TDD and BDD, except that you can read your tests better and use them as requirements. If you write your requirements with the same words as you write BDD tests, then you can come back from your client with some of your tests already defined, ready for writing code.
Here's the quick snapshot:
TDD is just the process of testing code before writing it!
DDD is the process of being informed about the Domain before each cycle of touching code!
BDD is an implementation of TDD which brings in some aspects of DDD!