Programming Workflow with Unit Testing but Without TDD - unit-testing

I always had the notion that when you do unit testing, you have to do TDD. However, after months of reading about it, it turns out that is not the case. The standard TDD workflow is: write tests first, then write the code to satisfy those tests, then iterate.
The problem is, I can't imagine how to do unit testing without following the TDD workflow. Do you know of any way I can do this?

TDD means doing small design steps first: for example, thinking up your "desired" method signature, then writing a test for it, then implementing it until green, then refactoring if necessary.
This workflow makes sense (to me anyway).
Alternatively, you can write tests afterwards to gain some confidence. But then you are missing the small design steps and the emergent aspect: by designing, implementing to green, and refactoring, your design and implementation emerge together, and at the same time you have your tests to prove correct behavior.
These are my thoughts but if you feel this doesn't suit your question then please be more specific about your needs.

Working with unit testing without TDD can be as simple as writing a few methods, then writing some unit tests that prove they do what you think they should. Once the tests are in place you can optionally refactor your code to improve the implementation, safe in the knowledge that if you introduce any bugs the tests should catch them. Alternatively, you could write all the code, then all the tests, but this has the disadvantage that if some of the tests break it could have been ages since you wrote the code, and so finding and fixing the problems will be harder.
TDD has a three-stage cycle: Red (write a failing test), Green (write the code necessary to get that test to pass) and Refactor (aggressively refactor the code to improve code design).
A couple of advantages of TDD are that by writing the tests first you've got proof they can fail (otherwise you could have accidentally written a test which will ALWAYS pass) and you're writing the tests agnostic of how you're writing the code (so you're less likely to write tests that contain the same logic as the production code - which is pointless).
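To make the cycle concrete, here is a minimal sketch in C# with NUnit (the ShoppingCart class and its names are invented for illustration, not taken from the answer above): the test is written first and fails (Red), the smallest implementation makes it pass (Green), and the internals are then free to be refactored under the test's protection.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class ShoppingCart
{
    private readonly List<decimal> _prices = new List<decimal>();

    public void Add(decimal price) => _prices.Add(price);

    // Green step: just enough code to make the test below pass.
    // Refactor step: the internals can later change freely as long as the test stays green.
    public decimal Total()
    {
        decimal sum = 0m;
        foreach (var price in _prices) sum += price;
        return sum;
    }
}

[TestFixture]
public class ShoppingCartTests
{
    // Red step: this test is written first and fails until Total() is implemented.
    [Test]
    public void Total_WithTwoItems_ReturnsTheirSum()
    {
        var cart = new ShoppingCart();
        cart.Add(2.50m);
        cart.Add(1.25m);

        Assert.AreEqual(3.75m, cart.Total());
    }
}
```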

Related

What is unit testing, and does it require code being written?

I've joined a new team, and I've had a problem understanding how they are doing unit tests. When I asked where the unit tests are written, they explained they don't do their unit tests that way.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected. To me, this is integration testing and just testing your code locally.
I was under the impression that unit tests are code written to verify behavior in a small section of code. For example, you might write a unit test to make sure a method returns the right value and makes the appropriate calls to the database, using a framework like NUnit or MbUnit to help you with your assertions.
Unit testing to me is supposed to be fast and quick. You want these tests so you can automate them and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
Can someone provide clarification in my or their misunderstandings?
I have worked places that did testing that way and called it unit testing. It reminded me of a quote attributed to Abe Lincoln:
Lincoln: How many legs does a dog have?
Other Guy: 4.
Lincoln: What if we called the tail a leg?
Other Guy: Well, then it would have 5.
Lincoln: No, the answer is still 4. Calling a tail a leg doesn't make it so.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected.
That is not a unit test. That is a code review. Code reviews are good, but without actual unit tests things will break.
Unit tests involve writing code. Specifically, a unit test operates on one unit, which is just a class or component of your software.
If a class under test depends on another class, and you test both classes together, you have an integration test. Integration tests are good. Depending on the language/framework you might use the same testing framework (e.g. JUnit for Java) for both unit and integration tests. If you have a dependency but mock or stub that dependency, then you have a pure unit test.
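As a hedged illustration of that distinction, the NUnit sketch below (with invented names IOrderRepository, OrderService and StubOrderRepository) tests a class in isolation by substituting a hand-rolled stub for its dependency; swapping in a real database-backed repository would turn the same test into an integration test.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderAmounts(int customerId);
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository) => _repository = repository;

    public decimal TotalSpentBy(int customerId)
    {
        decimal total = 0m;
        foreach (var amount in _repository.GetOrderAmounts(customerId)) total += amount;
        return total;
    }
}

// Hand-rolled stub: no database involved, so the test below stays a pure unit test.
public class StubOrderRepository : IOrderRepository
{
    public IEnumerable<decimal> GetOrderAmounts(int customerId) => new[] { 10m, 15m };
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void TotalSpentBy_SumsAmountsFromRepository()
    {
        var service = new OrderService(new StubOrderRepository());
        Assert.AreEqual(25m, service.TotalSpentBy(customerId: 42));
    }
}
```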
Unit testing to me is supposed to be fast and quick. You want these tests so you can automate them and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
That is essentially correct. How 'fast and quick' developing unit tests is depends on the complexity of what is being tested and the skill of the developer writing the test. You definitely want to build up a suite of tests over time, so you know when something breaks as a codebase becomes more complex. That is how testing makes your codebase more maintainable, by telling you what ceases to function as you make changes.
Your team-mates are not doing unit testing. They are doing "fly by the seat of your pants" development.
Your assumptions are correct.
Doing a project without unit tests (as they do, don't be fooled) might seem nice for the first few weeks: less code to write, less architecture to think about, fewer problems to worry about. And you can see the code is working correctly, right?
But as soon as someone (someone else, or even the original coder) comes back to an existing piece of code to modify it, add a feature, or simply understand how it works and what exactly it does, things will become a lot more problematic. And before you realize it, you'll spend your nights browsing through log files and debugging what seemed like a small feature just because it needs to integrate with other code that nobody knows exactly how it works. And you'll hate your job.
If it's not worth testing (with actual unit tests), then it's not worth writing the code in the first place. Everyone who has tried coding both with and without unit tests knows that. Please, please, make them change their mind. Every time a piece of untested code is checked in somewhere, a puppy dies horribly.
Also, I should say, it's a lot (A LOT) harder to add tests later to a project that was done without testing in mind than to build the tests and production code side by side from the very start. Testing not only helps you make sure your code works, it also improves your code quality by forcing you to make good decisions (e.g. coding to interfaces, loose coupling, inversion of control, etc.).
"Unit testing" != "unit tests".
Writing unit tests is one specific method of performing unit testing. It is a very good one, and if your unit tests are written well, it can give you good value over a long time. But what they're doing is indeed unit testing. It's just the kind of unit testing that doesn't help you at all the next time you need to carve on the same code. And that's wasteful.
To add my two cents, yes, that is indeed not unit testing. IMHO, the main features of unit tests are that they should be fast, automated and isolated. You can use a mocking framework such as RhinoMocks to isolate external dependencies.
Unit tests also have to be very simple and short. Ideally no more than a screen length. It is also one of the few places in software engineering where copy and pasting code might be a better solution than creating highly reusable and highly abstract functions. The reason simplicity is given such a high priority is to avoid the "Who watches the Watchers" problem. You really don't want to be in a situation where you have complex bugs in your unit tests, because they themselves aren't being tested. Here you are relying on the extreme simplicity and tiny size of the tests to avoid bugs.
The names of the unit tests also should be very descriptive, again following the simplicity and self documenting paradigm. I should be able to read the name of the test method and know exactly what it is doing. A quick glance at the code should show me exactly what functionality is being tested and if any external dependencies are being mocked.
The descriptive test names also make you think about the application as a whole. If I look at the entire test run, ideally just by looking at the names of all the tests that were run, I should have a fairly good idea of what the application does.
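A small sketch of what such simple, descriptively named tests might look like in NUnit (PriceCalculator and its discount rule are made up for illustration); note the deliberate repetition between the two tests instead of a shared helper, keeping each one readable on its own.

```csharp
using NUnit.Framework;

public static class PriceCalculator
{
    // Invented rule: buying ten or more of an item earns a 10% discount.
    public static decimal ApplyDiscount(decimal price, int quantity) =>
        quantity >= 10 ? price * 0.9m : price;
}

[TestFixture]
public class PriceCalculatorTests
{
    // The arrange/act/assert steps are repeated on purpose rather than abstracted away,
    // so each test stays trivially simple and self-documenting.
    [Test]
    public void ApplyDiscount_QuantityBelowTen_ReturnsFullPrice()
    {
        var result = PriceCalculator.ApplyDiscount(100m, quantity: 9);
        Assert.AreEqual(100m, result);
    }

    [Test]
    public void ApplyDiscount_QuantityOfTenOrMore_ReturnsTenPercentOff()
    {
        var result = PriceCalculator.ApplyDiscount(100m, quantity: 10);
        Assert.AreEqual(90m, result);
    }
}
```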

TDD vs. Unit testing [closed]

My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s).
There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits?
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written?
EDIT
What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. For those in the team who pickup the concept, start to move more towards TDD and testing first. Thanks for everyone's input.
FOLLOW UP
We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!
If the team is floundering at implementing TDD, but they weren't creating any Unit Tests before...then start them off by creating Unit Tests after their code is written. Even Unit tests written after the code are better than no Unit Tests at all!
Once they're proficient at Unit Testing (and everything that comes with it), then you can work on getting them to create the tests first...and code second.
It is absolutely still worth writing the unit tests after the code is written. It's just that it's often harder, because your code wasn't designed to be testable and you may have overcomplicated it.
I think a good pragmatic way to bring a team into TDD is to provide the alternative method of "test-during development" in the transition period, or possibly in the long-term. They should be encouraged to TDD sections of code that seem natural to them. However, in sections of code that seem difficult to approach test-first or when using objects that are predetermined by a non-agile A&D process, developers can be given the option of writing a small section of the code, then writing tests to cover that code, and repeating this process. Writing unit tests for some code immediately after writing that code is better than not writing any unit tests at all.
It's in my humble opinion better to have 50% test coverage with "code first, test after" and a 100% completed library, than 100% test coverage and a 50% completed library with TDD. After a while, your fellow developers will hopefully find it entertaining and educational to write tests for all of the public code they write, so TDD will sneak its way into their development routine.
TDD is about design! So if you use it, you will be sure to have a testable design of your code, making it easier to write your tests. If you write tests after the code is written they are still valuable but IMHO you will be wasting time since you will probably not have a testable design.
One suggestion I can give to you to try to convince your team to adopt TDD is using some of the techniques described in Fearless Change: Patterns for Introducing New Ideas, by Mary Lynn Manns and Linda Rising.
I just read this on a calendar: "Every rule, executed to its utmost, becomes ridiculous or even dangerous." So my suggestion is not to be religious about it. Every member of your team must find their own balance for what feels "right" when it comes to testing. This way, every member of your team will be most productive (instead of, say, thinking "why do I have to write this sti**** test??").
So some tests are better than none, tests written after the code are better than only a few tests, and testing before the code is better than testing after. But each step has its own merits and you shouldn't frown upon even small steps.
If they're new to testing, then IMO start off testing code that's already been written and slowly graduate to writing tests first. As someone trying to learn TDD and new to unit testing, I've found it kind of hard to do a complete 180 and change my mindset to write tests before code, so the approach I'm taking is sort of a 50-50 mix: when I know exactly what the code will look like, I'll write the code and then write a test to verify it. For situations where I'm not entirely sure, I'll start with a test and work my way backwards.
Also remember that there is nothing wrong with writing tests to verify code, instead of writing code to satisfy tests. If your team doesn't want to go the TDD route then don't force it on them.
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
There is absolutely no doubt about the fact that there is value in unit tested code (regardless of when tests were written) and I include "the code is unit tested" in the "Definition of Done". People may use TDD or not, as long as they test.
Regarding version control, I like to use "development branches" with a unit tested policy (i.e. the code compiles and builds, all unit tests pass). When features are done, they are published from development branches to the trunk. In other words, the trunk branch is the "Done branch" (No junk on the trunk!) and has a shippable policy (can release at any time) which is more strict and includes more things than "unit tested".
This is something that your team will have to have its own successes with before they begin to believe in it. I'll rant about my nUnit epiphany for anyone who cares:
About 5 years ago I discovered nUnit when working on a project. We had almost completed V1.0 and I created a few tests just to try out this new tool. We had a lot of bugs (obviously!) because we were a new team, on a tight deadline, with high expectations (sound familiar?), etc. Anyway we got 1.0 in and started on 1.1. We re-orged the team a little bit and I got 2 devs assigned to me. I did a 1-hour demo for them and told them that everything we wrote had to have a test case with it. We constantly ran "behind" the rest of the team during the 1.1 dev cycle because we were writing more code: the unit tests. We ended up working more, but here's the payoff -- when we finally got into testing we had exactly 0 bugs in our code. We helped everyone else debug and repair their bugs. In the postmortem, when the bug counts showed up, it got EVERYONE's attention.
I'm not dumb enough to think you can test your way to success but I am a true believer when it comes to unit tests. The project adopted nUnit and it soon spread to the company for all .Net projects as a result of 1 success. Total time period for our V1.1 release was 9 dev weeks so it was definitely NOT an overnight success. But long term, it proved successful for our project and the company we built solutions for.
There is no doubt that testing (First, While or even After) will save your bacon, and improve your productivity and confidence. I recommend adopting it!
I was in a similar situation. Because I was a "noob" developer, I was often frustrated when working on team projects by the fact that a contribution had broken the build. I did not know if I was to blame or, in some cases, who to blame. But I was more concerned that I was doing the same thing to my fellow developers. This realisation then motivated me to adopt some TDD strategies. Our team started having silly games and rules, like you cannot go home till all your tests pass, or if you submit something without a test then you have to buy everyone beer/lunch/etc., and it made TDD more fun.
One of the most useful aspects of unit testing is ensuring the continuing correctness of already-working code. When you can refactor at will, let an IDE remind you of compile-time errors, and then click a button to let your tests spot any potential runtime errors (sometimes arriving in previously trivial blocks of code), I think you will find your team starting to appreciate TDD. So starting with testing existing code is definitely useful.
Also, to be blunt, I have learned more about how to write testable code by trying to test written code than from starting with TDD. It can be too abstract at first if you are trying to think of contracts that will both accomplish the end goal and allow testing. But when you look at code and can say "This singleton here completely spoils dependency injection and makes testing this impossible," you start to develop an appreciation for what patterns make your testing life easier.
Well, if you do not write tests first it's not "test driven", it's just testing. That has benefits in itself, and if you already have a code base, adding tests for it is certainly useful even if it's not TDD but merely testing.
Writing tests first is about focusing on what the code should do before writing it. Yes you also get a test doing that and it's good, but some may argue it's not even the most important point.
What I would do is train the team on toy projects like these (see Coding Dojo, Katas) using TDD (if you can get experienced TDD programmers to participate in such a workshop, even better). When they see the benefits they will use TDD for the real project. But meanwhile do not force them; if they do not see the benefit they won't do it right.
If you have design sessions before writing code or have to produce a design doc, then you could add Unit Tests as the tangible outcome of a session.
This could then serve as a specification as to how your code should work. Encourage pairing on the design session, to get people talking about how something should work and what it should do in given scenarios. What are the edge cases, with explicit test cases for them so everyone knows what it's going to do if given a null argument for example.
As an aside, BDD may also be of interest.
You may find some traction by showing an example or two where TDD results in less code being written - because you only write code required to make the test pass, the temptation to gold-plate or build things you aren't gonna need (YAGNI) is easier to resist. Code you don't write doesn't need to be maintained, refactored, etc., so it's a "real savings" that can help sell the concept of TDD.
If you can clearly demonstrate the value in terms of time, cost, code and bugs saved, you may find it's an easier sell.
Building JUnit test classes is the way to start; for existing code it's the only way to start. In my experience it is very useful to create test classes for existing code. If management thinks this will take too much time, you can propose to only write test classes when the corresponding class is found to contain a bug or is in need of cleanup.
For the maintenance process the approach to get the team over the line would be to write JUnit tests to reproduce bugs before you fix them, i.e.
bug is reported
create JUnit test class if needed
add a test that reproduces the bug
fix your code
run the test to show that the current code no longer exhibits the bug
You can explain that "documenting" bugs in this way will prevent those bugs from creeping back in later. That is a benefit the team can experience immediately.
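A rough sketch of step 3 of that workflow, shown in C#/NUnit for consistency with the other examples in this thread (the answer above mentions JUnit; the idea is the same). OrderNumberParser and the bug it documents are invented for illustration.

```csharp
using NUnit.Framework;

public static class OrderNumberParser
{
    // Hypothetical fix: before the fix, order numbers without a dash (e.g. "A1042")
    // caused a crash because the code assumed a "letter-number" format.
    public static int Parse(string orderNumber)
    {
        var digits = orderNumber.Contains("-")
            ? orderNumber.Split('-')[1]
            : orderNumber.Substring(1);
        return int.Parse(digits);
    }
}

[TestFixture]
public class OrderNumberParserRegressionTests
{
    // This is the test added to reproduce the reported bug.
    // It failed before the fix and passes afterwards, documenting the bug permanently.
    [Test]
    public void Parse_OrderNumberWithoutDash_ReturnsNumericPart()
    {
        Assert.AreEqual(1042, OrderNumberParser.Parse("A1042"));
    }

    // Existing behaviour, kept covered so the fix does not break it.
    [Test]
    public void Parse_OrderNumberWithDash_ReturnsNumericPart()
    {
        Assert.AreEqual(1042, OrderNumberParser.Parse("A-1042"));
    }
}
```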
I have done this in many organizations and I have found the single best way to get TDD started and followed is to set up pair programming. If you have someone else you can count on that knows TDD then the two of you can split up and pair with other developers to actually do some paired programming using TDD. If not I would train someone who will help you to do this before presenting it to the rest of the team.
One of the major hurdles with unit testing and especially TDD is that developers don't know how to do it, so they cannot see how it can be worth their time. Also, when you first start out, it is much slower and doesn't seem to provide benefits. It only really provides benefits once you are good at it. By setting up paired programming sessions you can get developers to learn it quickly and get good at it quicker. Additionally they will be able to see immediate benefits from it as you work through it together.
This approach has worked many times for me in the past.
One powerful way to discover the benefits of TDD is to do a significant rewrite of some existing functionality, perhaps for performance reasons. By creating a suite of tests that do a good job covering all the functionality of the existing code, this then gives you the confidence to refactor to your heart's content with full confidence that your changes are safe.
Note that in this case I'm talking about testing the design or contract - unit tests that test implementation details will not be suitable here. But then again, TDD tests can't test implementation details by definition, as they are supposed to be written before the implementation.
TDD is a tool that developers can use to produce better code. I happen to feel that the exercise of writing testable code is at least as valuable as the tests themselves. Isolating the IUT (Implementation Under Test) for testing purposes has the side effect of decoupling your code.
TDD isn't for everyone, and there's no magic that will get a team to choose to do it. The risk is that unit test writers that don't know what's worth testing will write a lot of low value tests, which will be cannon fodder for the TDD skeptics in your organization.
I usually make automated Acceptance Tests non-negotiable, but allow developers to adopt TDD as it suits them. I have my experienced TDDers train/mentor the rest and "prove" the usefulness by example over a period of many months.
This is as much a social/cultural change as it is a technical one.

TDD. test first anyway?

Are you doing test first anyway? Or do you, in some cases, do some coding and then write your tests to make sure the code works? As for me, I prefer to create a class first. Sure, during class creation I think about its interface and how to test the class. But I don't write testing code first. Do you write it first? Do you think you should always write test code first?
I'm not a purist in this matter (TDD involves more than just writing the tests first, it's also about initially writing very minimal, "hard coded" tests and refactoring them a lot -- see The Book by The Master himself).
I tend to test-first when I'm doing incremental development to add a feature to an existing module, and I insist on test-first when the incremental development I'm doing is to fix a bug (in the latter case I absolutely want a unit-test AND an integration-test that both reproduce the bug, before I fix the code that caused the bug).
I tend to be laxer when I'm doing "greenfield" development, especially if that's of an exploratory, "let's see what we can do here that's useful", nature -- which does happen, e.g. in data mining and the like -- you have a somewhat vague idea that there might be a useful signal buried in the data, some hypothesis about its possible nature and smart ways to [maybe] extract it -- the tests won't help until the exploration has progressed quite a bit.
And, once I start feeling happy with what I've got, and thus start writing tests, I don't necessarily have to redo the "exploratory" code from scratch (as I keep it clean and usable as I go, not too hard to do especially in Python, but also in R and other flexible languages).
Test-driven development, by definition, is writing your tests first. If you create your class first, the subsequent tests you write can be called Unit Tests, but it is not TDD.
There are many who say that writing your tests first improves code quality. I am inclined to agree, provided there is some effort put into the software design beyond just writing tests and making them pass.
If you are refactoring an existing legacy system, it is a good idea to wrap the functionality of that system in a suite of tests prior to refactoring. That way, you know if your code changes break something.
In my humble opinion, Test Driven Development is something more than writing unit tests first. Test first TDD is more about putting yourself in the mindset of thinking about what exactly it is you are trying to achieve with the code you are about to write.
What should your acceptance criteria be?
When will your code be 'done' and how is done defined?
Writing unit tests before code is only one of the ways to formalize such acceptance criteria. The biggest benefit in my view of Test first TDD is that you give a good hard think and document (in unit tests, on paper, on the white board) what the acceptance criteria are for the feature/story you are implementing. Having such documentation also helps define the scope of done.
So, whether you code a function first and then write the unit test for it, or you decide to first write the test and then code the function, is of secondary importance. Once you've thought out and documented your acceptance criteria, your benefit will be that your target is clearer and you're more likely to focus on fulfilling the acceptance criteria, minimizing any 'noise' (e.g. feature x would be nice to add, am I 'done' yet).
Of course this does not mean that we go ahead and write code leaving it untested and attempt to retrofit unit tests when we've already coded 4 classes for example! I just think that there is no need to be 100% dogmatic in writing unit tests before actual code as the benefit in TDD lies elsewhere, but in any case our unit tests should keep up with our growing code base.
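One hedged way to capture such acceptance criteria as executable documentation is a set of deliberately named, initially ignored tests; the loyalty-discount story and the NUnit usage below are invented for illustration.

```csharp
using NUnit.Framework;

// Each test name states one acceptance criterion for a hypothetical "loyalty discount" story.
// The bodies are ignored placeholders until the criterion is implemented and asserted for real.
[TestFixture]
public class LoyaltyDiscountAcceptanceCriteria
{
    [Test]
    public void CustomerWithNoPurchaseHistory_GetsNoDiscount() =>
        Assert.Ignore("Criterion agreed, implementation pending");

    [Test]
    public void CustomerWithTenOrMoreOrders_GetsFivePercentDiscount() =>
        Assert.Ignore("Criterion agreed, implementation pending");

    [Test]
    public void DiscountIsNeverAppliedTwiceToTheSameOrder() =>
        Assert.Ignore("Criterion agreed, implementation pending");
}
```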
Let's be clear here: TDD was designed for production code. If you want to follow the rules of TDD for production code, then the first thing you write is your first test.
I write my first test (before writing a line of production code), which may try to instantiate a new class; running it produces some sort of error, which I fix by writing the smallest amount of production code that gets the test running.
If you want to do something else, then you can make up your own rules: You can write whatever code and/or tests you like to explore any designs or patterns or anything.
However, if you then want to write production code based on what you have learned from your experiments, you may well be better placed to write your first test (though technically you should really throw away your experiments first).

How to start unit testing or TDD?

I read a lot of posts that convinced me I should start writing unit tests. I also started to use dependency injection (Unity) for the sake of easier mocking, but I'm still not quite sure at what stage I should start writing the unit tests and mocks, or how and where to start.
Would the preferred way be writing the unit tests before the methods as described in the TDD methodology?
Is there any different methodology or manner for unit testing?
Test first / test after:
It should be noted that 'test first' as part of TDD has just as much (if not more) to do with design as it does with unit testing. It's a software development technique in its own right -- writing the tests results in a constant refining of the design.
On a separate note: If there is one significant advantage to TDD from a pure unit testing perspective, it is that it is much harder (though not impossible) to write a test that's wrong when doing TDD. If you write the test beforehand, it should always fail because the logic required to make the test pass does not yet exist. If you write the test afterwards, the logic should be there, but if the test is bugged or is testing the wrong thing, it may pass regardless.
I.e. if you write a bad test before, you may get a green light when you expect a red (so you know the test is bad). If you write a bad test afterwards, you will get a green light when you expected a green (unaware of the bad test).
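A sketch of that trap, with an invented TaxCalculator: the first test below re-derives its expectation from the actual result, so it can never fail. Had it been written before the implementation (while AddTax was still a stub returning the wrong value), it would have gone green when you expected red, exposing the mistake; written afterwards, it just silently passes forever.

```csharp
using NUnit.Framework;

public static class TaxCalculator
{
    // Invented example: add 20% tax to a net amount.
    public static decimal AddTax(decimal net) => net * 1.20m;
}

[TestFixture]
public class TaxCalculatorTests
{
    // BAD: the expectation is copied from the result, so this passes no matter
    // what AddTax returns. Written test-first against a stub, its unexpected
    // green bar would have given the game away.
    [Test]
    public void AddTax_Bugged_AlwaysPasses()
    {
        var actual = TaxCalculator.AddTax(100m);
        var expected = actual;
        Assert.AreEqual(expected, actual);
    }

    // GOOD: the expectation is stated independently of the production logic.
    [Test]
    public void AddTax_OnOneHundred_ReturnsOneHundredTwenty()
    {
        Assert.AreEqual(120m, TaxCalculator.AddTax(100m));
    }
}
```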
Books
The Pragmatic Unit Testing book is well worth a look, as is Roy Osherove's "The Art of Unit Testing". The Pragmatic book is more narrowly focused on the different types of test inputs you can try in order to find bugs, whereas TAOUT covers a wider spread of topics such as test doubles, strategies, maintainability, etc. Either book is good; it depends what you want from it.
Also, here's a link to a talk Roy Osherove did on unit testing. It's worth a watch (so are some of the test review videos he recorded, as he points out various problems and dos/don'ts along with reasons why).
How to start
There's nothing better than writing code. Find a fairly simple class that doesn't reference much else. Then, start writing some tests.
Always ask yourself "what do I want to try and prove with this test?" before you write it, then give it a decent name (usually involving the method being called, the scenario and the expected result, e.g. on a stack: "Pop_WhenStackIsEmpty_ThrowsException").
Think of all the inputs you can throw at it, different combinations of methods that may yield interesting results and so forth.
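Taking the stack example literally, a minimal NUnit sketch of such names might look like the following (the second test is my own addition, following the same method/scenario/result pattern against the standard Stack<T>).

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    // Name spells out: method under test, scenario, expected result.
    [Test]
    public void Pop_WhenStackIsEmpty_ThrowsException()
    {
        var stack = new Stack<int>();
        Assert.Throws<InvalidOperationException>(() => stack.Pop());
    }

    [Test]
    public void Pop_AfterOnePush_ReturnsThePushedValue()
    {
        var stack = new Stack<int>();
        stack.Push(7);
        Assert.AreEqual(7, stack.Pop());
    }
}
```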
If you are curious about unit testing the best way to learn it is try it. You will probably start writing integration tests at first, but that is fine. When they seem too difficult to maintain or too much work to write, read more about what kind of tests there are (unit, functional, integration) and try to learn the difference.
If you are interested in starting with TDD, Uncle Bob is a good source. Particularly this.
More on unit testing
Ensure that you get consistent test results. Running the same test repeatedly should return the same results consistently.
The tests should not require configuration.
Testing order should not matter. This means partial test runs can work correctly. Also, if you keep this design philosophy in mind, it will likely aid in your test creation.
Remember that any testing is better than no testing. The difficulty can be found in writing good clean unit tests that promote ease of creation and maintenance. The more difficult the testing framework, the more opposition there will be to employing it. Testing is your friend.
In C# and with visual studio I find following procedure very helpful:
Think! Make a small upfront design. You need to have a clear picture of what classes you need and how objects should relate to each other. Concentrate only on one class/object (the class/object you want to implement) and one relationship. Otherwise you end up with too heavyweight a design. I often end up with multiple sketches (only a few boxes and arrows) on a spare sheet of paper.
Create the class in the production code and name it appropriately.
Pick one behaviour of the class you want to implement and create a method stub for it. With visual studio creating empty method stubs is a piece of cake.
Write a test for it. To do so you will need to initialize the object, call the method and make an assertion to verify the result. In the simplest case the assertion requires another method stub or a property in the production code.
Compile and let the test runner show you the red bar!
Code the required behavior in the production code to see the green bar appear.
Move to the next behaviour.
For this procedure two things are very important:
You need a clear picture of what you want and what the class/object should look like. At least spend some time on it.
Think about the behaviours of the class/object. This will make the tests and the production code behaviour-centric, which is a very natural way to think about classes/objects.
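A compressed sketch of that procedure, with invented Account/Withdraw names: the class and method stub come first, the test drives the red bar, and the behaviour is then coded to make it green.

```csharp
using System;
using NUnit.Framework;

public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance) => Balance = openingBalance;

    // Step 3: an empty stub was enough to make the tests compile and show a red bar.
    // Step 6: the body below was then filled in to make the bar turn green.
    public void Withdraw(decimal amount)
    {
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }
}

[TestFixture]
public class AccountTests
{
    // Step 4: initialize the object, call the method, assert on the result.
    [Test]
    public void Withdraw_WithSufficientFunds_ReducesBalance()
    {
        var account = new Account(100m);
        account.Withdraw(30m);
        Assert.AreEqual(70m, account.Balance);
    }

    [Test]
    public void Withdraw_MoreThanBalance_Throws()
    {
        var account = new Account(10m);
        Assert.Throws<InvalidOperationException>(() => account.Withdraw(20m));
    }
}
```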
Test first or not test first?
I usually find it harder to retrofit tests to existing production code. In most cases this is due to tangled dependencies on other objects when the object under test needs to be initialized. TDD usually avoids this, because you want to initialize the object in the test cases with as little effort as possible. This results in very loose coupling.
When I retrofit tests, the most cumbersome job is initializing the object under test. The assertions may also be a lot of work because of tangled dependencies. To fix this you will need to change the production code and break dependencies. If dependency injection is used properly, this should not be an issue.
Yes, the preferred way of doing TDD is to write the test first (as implied by the name Test-Driven Development). When you start out with TDD it can be hard to know where to start writing the tests, so I would suggest to be pragmatic about it. After all, the most important part of our job is about delivering working code, not so much how the code was crafted.
So you can start by writing tests for existing code. Once you get the hang of how the unit tests are structured, which ones seem to do a good job and which ones not so good, you will find it easier to dive more into the test-first approach. I have found that I write tests first to a greater extent as time goes by. It simply becomes more natural with increased experience.
Steve Sanderson has a great writeup on TDD best practices.
http://feeds.codeville.net/~r/SteveCodeville/~3/DWmOM3O0M2s/
Also, there's a great set of tutorials for doing an ASP.NET MVC project that discusses a lot of TDD principles (if you don't mind learning ASP.NET MVC along the way).
http://www.asp.net/learn/mvc-videos/ Look for the "Storefront" series at the bottom of the page.
MOQ seems to be the hot mocking framework lately, you may want to look into that as well
In summary, try to write a test to validate something you're trying to achieve, then implement the code to make it work.
The pattern is known as Red - Green - Refactor. Also do your best to minimize dependencies so that your tests can focus on one component.
Personally, I use Visual Studio Unit Tests. I'm not a hardcore TDD developer, but what i like to do is this:
Create a new project and define a few of the fundamental classes based on the system design (that way I can at least get some IntelliSense)
Create a unit test project and start writing unit tests to satisfy the functionality I'm trying to implement.
Make them fail
Make them pass (implement)
Refactor
Repeat, trying to make the tests more stringent or creating more tests until I feel it's solid.
I also feel it's very useful for adding functionality onto an existing code base. If you want to add a new feature, first create the unit test for what you want to add, step through the code to see what you have to change, then go through the TDD process.
Choose a small non-critical application and implement it using TDD. At first the new way of thinking will feel weird, but maybe after a week or two of practice it will feel natural.
Here is a tutorial application (in branch "tutorial") that shows what kinds of tests to write. In that tutorial you write code to pass the predefined test cases, so that you get into the rhythm, and later you then write your own tests. The README file contains instructions. It's written in Java, but you can easily adapt it to C#.
I would take on TDD, test-first development, before mocks and dependency injection. To be sure, mocks can help you better isolate your units - and thus do better unit testing - but to my mind, mocking and DI are more advanced concepts that can interfere with the clarity of just writing your tests first.
Mocks, and DI, have their place; they're good tools to have in your toolbox. But they take some sophistication, a more advanced understanding, than the typical testing neophyte has. Writing your tests first, however, is exactly as simple as it sounds. So it's easier to take on, and it's powerful all by itself (without mocks and DI). You'll get earlier, easier wins by writing mock-free tests first, than by trying to begin with mocks, and TDD, and DI all at once.
Start with test-first; when you are very comfortable with it, and when your code is telling you you need mocks, then take on mocks.
I have worked for companies which take unit testing/integration testing too far and those that do too little so I like to think I have a good balance between the two.
I would recommend TDD - Test Driven Development. This ensures you have good coverage but it also keeps focusing your attention on the right place and problem.
So the first thing you do for every piece of new development is write a unit test - even if you don't have a single class to test.
Think about what you are testing. Now run the test. Why won't it compile? Because you need classA. Create the class and run the test. Why doesn't it compile? Because it doesn't have methodA. Write methodA and run the unit test again. Why does the test fail? Because methodA isn't implemented. Implement methodA and run the test. Why does it fail? Because methodA doesn't return the expected value... etc.
You continue like this writing unit tests as you develop and then eventually the test will pass and the piece of functionality will be complete.
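Condensed into code, using the answer's own placeholder names classA and methodA (the addition behaviour is invented), the end state of that loop might look like this; each commented stage corresponds to one run of the test.

```csharp
using NUnit.Framework;

// Stage 2: created only because the test would not compile without it.
public class classA
{
    // Stage 3: stub added because the test still would not compile.
    // Stage 4/5: body filled in because the test compiled but failed,
    // then failed again until the expected value was returned.
    public int methodA(int x, int y) => x + y;
}

[TestFixture]
public class classATests
{
    // Stage 1: written first, before classA existed at all.
    [Test]
    public void methodA_AddsItsArguments()
    {
        var sut = new classA();
        Assert.AreEqual(5, sut.methodA(2, 3));
    }
}
```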
Extending on Steve Freeman's answer: Dave Astels' book is called "Test-Driven Development: A Practical Guide". If the kind of application you're writing is a GUI application then this should be helpful. I read Kent Beck's book but I couldn't figure out how to start a project with TDD. Astels' book test-drives a complete, non-trivial GUI application from start to finish using stories. It helped me a lot to actually start with TDD; it showed me where and how to start.
Test-driven development can be confusing for beginners. A lot of books that I read when I was learning TDD would teach you how to write unit tests for a Calculator class, but there seems to be very little help for building real-world apps, which are more data-centric, if I dare say. For me the breakthrough was when I understood what behaviour-driven development (BDD) is and how to start testing from the outside in. Now I can simply advise you to focus on your application's behaviour and write unit tests to verify it. There is a lot of debate going on between TDD and BDD, but I think that well-written automated tests at every level add value, and to write them we need to focus on behaviour.
Hadi Hariri has an excellent post here
http://hadihariri.com/2012/04/11/what-bdd-has-taught-me/
I have also written some articles on the topic that I feel will help in understanding all the concepts related to TDD here
http://codecooked.com/introduction-to-unit-testing-and-test-driven-development/
http://codecooked.com/different-types-of-tests-and-which-ones-to-use/
Read Pragmatic Unit Testing in C# with NUnit. It has comprehensive information about starting to write tests and structuring the code to make it more unit-testing friendly.
If you haven't written unit tests before, then just pick some classes and begin to write your unit tests, and continue to work on developing more unit tests.
As you gain experience you can then begin to mock out the database for example, by using the Unity framework, but, I would suggest starting simply and gaining experience before making this leap.
Once you are comfortable with how to write unit tests, then you can try doing TDD.
I prefer Kent Beck's approach, which is nicely explained in his book Test-Driven Development by Example.
From your question I can infer that you are not sure about the test framework; choosing the correct test framework is very important for TDD or for writing unit tests in general.
The only practical problem with TDD is that refactoring (we need to refactor test code as well) takes a lot of time.
I think Dave Astels' book is still one of the best introductions. It's for Java, but you should be able to translate.

How do you tell that your unit tests are correct?

I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, then using the unit test as a sort of regression test. What is the recommended approach and/or what is the approach you take to this problem?
Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing.
Edit2: For the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code", what if the test overfits? It's possible to pass both good and bad code with a bad test. My main question is: what good is unit testing if your tests can be flawed? You can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification.
The unit test should express the "contract" of whatever you are testing. It's more or less the specification of the unit put into code. As such, given the specs, it should be more or less obvious whether the unit tests are "correct".
But I would not worry too much about the "correctness" of the unit tests. They are part of the software, and as such, they could well be incorrect as well. The point of unit tests - from my POV - is that they ensure the "contract" of your software is not broken by accident. That is what makes unit tests so valuable: You can dig around in the software, refactor some parts, change the algorithms in others, and your unit tests will tell you if you broke anything. Even incorrect unit tests will tell you that.
If there is a bug in your unit tests, you will find out - because the unit test fails while the tested code turns out to be correct. Well then, fix the unit test. No big deal.
Well, Dijkstra famously said:
"Testing shows the presence, not the
absence of bugs"
IOW, how would you write a unit test for the function add(int, int)?
IOW, it's a tough one.
There are two ways to help ensure the correctness of your unit tests:
TDD: Write the test first, then write the code it's meant to test. That means you get to see it fail. If you know that it detects at least some classes of bugs (such as "I haven't implemented any functionality in the function I want to test yet"), then you know that it's not completely useless. It may still let some other bugs slip past, but we know that the test is not completely incorrect.
Have lots of tests. If one test lets some bugs slip past, they'll most likely cause errors further down the line, causing other tests to fail. As you notice that, and fix the offending code, you get a chance to examine why the first test didn't catch the error as expected.
And finally, of course, keep the unit tests so simple that they're unlikely to contain bugs.
For this to be a problem your code would have to be buggy in a way that coincidentally causes your tests to pass. This happened to me recently, where I was checking that a given condition (a) caused a method to fail. The test passed (i.e. the method failed), but it passed because another condition (b) caused a failure. Write your tests carefully, and make sure that unit tests test ONE thing.
Generally though, tests cannot be written to prove code is bug free. They're a step in the right direction.
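A hedged sketch of keeping a test to ONE thing: rather than merely asserting that the call fails, the test below (invented Validator example) pins down which failure occurred, so an unrelated condition (b) failing cannot masquerade as a pass for condition (a).

```csharp
using System;
using NUnit.Framework;

public static class Validator
{
    public static void CheckAge(int age)
    {
        // Condition (a): negative ages are rejected.
        if (age < 0) throw new ArgumentOutOfRangeException(nameof(age), "Age cannot be negative");
        // Condition (b): an unrelated rule that also causes failure.
        if (age > 150) throw new ArgumentOutOfRangeException(nameof(age), "Age is implausibly large");
    }
}

[TestFixture]
public class ValidatorTests
{
    [Test]
    public void CheckAge_NegativeAge_FailsBecauseAgeIsNegative()
    {
        var ex = Assert.Throws<ArgumentOutOfRangeException>(() => Validator.CheckAge(-1));
        // Asserting on the message ties the failure to condition (a),
        // not just "some exception was thrown".
        StringAssert.Contains("negative", ex.Message);
    }
}
```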
The complexity of the unit test code is (or should be) less (often orders of magnitude less) than the real code
The chance of your coding a bug in your unit test that exactly matches a bug in your real code is much less than just coding the bug in your real code (if you code a bug in your unit test that doesn't match a bug in your real code, it should fail). Of course, if you have made incorrect assumptions in your real code you are likely to make the same assumption again - although the mindset of unit testing should still reduce even that case.
As already alluded to, when you write a unit test you have (or should have) a different mindset. When writing real code you're thinking "how do I solve this problem?". When writing a unit test you're thinking, "how do I test every possible way this could break?"
As others have already said, it's not so much about whether you can prove that the unit tests are correct and complete (although that's almost certainly much easier with test code) as it is about reducing the bug count to a very low number - and pushing it lower and lower.
Of course there has to come a point where you're confident enough in your unit tests to rely on them - for example when doing refactorings. Reaching this point is usually just a case of experience and intuition (although there are code coverage tools that help).
I had this same question, and having read the comments, here's what I now think (with due credit to the previous answers):
I think the problem may be that we both took the ostensible purpose of unit tests -- to prove the code is correct -- and applied that purpose to the tests themselves. That's fine as far as it goes, except the purpose of unit tests is not to prove that the code is correct.
As with all nontrivial endeavors, you can never be 100% sure. The correct purpose of unit tests is to reduce bugs, not eliminate them. Most specifically, as others have noted, when you make changes later on that might accidentally break something. Unit tests are just one tool to reduce bugs, and certainly should not be the only one. Ideally you combine unit testing with code review and solid QA in order to reduce bugs to a tolerable level.
Unit tests are much simpler than your code; it's not possible to make your code as simple as a unit test if your code does anything significant. If you write "small, granular" code that's easy to prove correct, then your code will consist of a huge number of small functions, and you're still going to have to determine whether they all work correctly in concert.
Since unit tests are inevitably simpler than the code they're testing, they're less likely to have bugs. Even if some of your unit tests are buggy, overall they're still going to improve the quality of your main codebase. (If your unit tests are so buggy that this isn't true, then likely your main codebase is a steaming pile as well, and you're completely screwed. I think we're all assuming a basic level of competence.)
If you DID want to apply a second level of unit testing to prove your unit tests correct, you could do so, but it's subject to diminishing returns. To look at it faux-numerically:
Assume that unit testing reduces the number of production bugs by 50%. You then write meta-unit tests (unit tests to find bugs in the unit tests). Say that this finds problems with your unit tests, reducing the production bug rate to 40%. But it took 80% as long to write the meta-unit tests as it did to write the unit tests. For 80% of the effort you only got another 20% of the gain. Maybe writing meta-meta-unit tests gives you another 5 percentage points, but now again that took 80% of the time it took to write the meta-unit tests, so for 64% of the effort of writing the unit tests (which gave you 50%) you got another 5%. Even with substantially more liberal numbers, it's not an efficient way to spend your time.
In this scenario it's clear that going past the point of writing unit tests isn't worth the effort.
I guess writing the test first (before writing the code) is a pretty good way of being sure your test is valid.
Or you could write tests for your unit tests... :P
You don't tell. Generally, the tests will be simpler than the code they're testing, so the idea is simply that they'll be less likely to have bugs than the real code will.
First let me start by saying that unit testing is NOT only about testing. It is more about the design of the application. To see this in action you should point a camera at your display and record your coding while writing unit tests. You will realize that you are making a lot of design decisions when writing unit tests.
How to know if my unit tests are good?
You cannot test the logic itself, period! If your code says that 2 + 2 = 5 and your test makes sure that 2 + 2 = 5, then for you 2 + 2 is 5. To write good unit tests you MUST have a good understanding of the domain you are working in. When you know what you are trying to accomplish you will write good tests and good code to accomplish it. If you have many unit tests and your assumptions are wrong then sooner or later you will find out your mistakes.
This is one of the advantages of TDD: the code acts as a test for the tests.
It is possible that you'll make equivalent errors, but it is uncommon in my experience.
But I have certainly had the case where I write a test that should fail only to have it pass, which told me my test was wrong.
When I was first learning unit testing, and before I was doing TDD, I would also deliberately break the code after writing the test to ensure that it failed as I expected. When it didn't, I knew the test was broken.
I really like Bob Martin's description of this as being equivalent to double entry bookkeeping.
As above, the best way is to write the test before the actual code. Also, if applicable, find real-life references for the code you're testing (a mathematical formula or similar), and compare the unit test's expected output against them.
This is something that bugs everyone who uses unit tests. If I had to give you a short answer, I'd tell you to always trust your unit tests. But I would say that this should be backed up by your previous experience:
Did you have any defects that were reported from manual testing which the unit test didn't catch (although it was supposed to) because there was a bug in your test?
Did you have false negatives in the past?
Are your unit tests simple enough?
Do you write them before new code or at least in parallel?
You can't prove tests are correct, and if you're trying to, you're Doing It Wrong.
Unit tests are a first screen - a smoke test - like all automated testing. They are primarily there to tell you if a change you make later on breaks stuff. They are not designed to be a proof of quality, even at 100% coverage.
The metric does make management feel better, though, and that is useful in itself sometimes!
Dominic mentioned that "For this to be a problem your code would have to be buggy in a way that coincidentally causes your tests to pass." One technique you can use to see if this is a problem is mutation testing. It makes changes to your code and sees if that causes the unit tests to fail. If it doesn't, then it may indicate areas where the testing isn't 100% thorough.
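The idea can be illustrated by hand (tools such as pitest for Java or Stryker.NET for .NET automate it): a mutant is a small deliberate change to the code, and the tests around that behaviour are only trustworthy if some test "kills" the mutant by failing. The Shipping example below is invented for illustration.

```csharp
using NUnit.Framework;

public static class Shipping
{
    // Original rule: orders of 50 or more ship free.
    public static decimal Cost(decimal orderTotal) => orderTotal >= 50m ? 0m : 4.99m;

    // The kind of mutant a tool would generate: ">=" changed to ">".
    public static decimal CostMutant(decimal orderTotal) => orderTotal > 50m ? 0m : 4.99m;
}

[TestFixture]
public class ShippingTests
{
    // This boundary test "kills" the mutant: it passes against Cost but would fail
    // if Cost were replaced by CostMutant. Without a test like this, the mutation
    // would survive, flagging the boundary as under-tested.
    [Test]
    public void Cost_AtExactlyFifty_IsFree()
    {
        Assert.AreEqual(0m, Shipping.Cost(50m));
    }
}
```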
Unit tests are your requirements concretized. I don't know about you, but I like having the requirements specified before starting to code (TDD). By writing them and treating them like any other piece of your code you'll start to feel confident introducing new features without breaking old functionality. To ensure that all your code is needed and that the tests actually test the code, I use pitest (other mutation-testing tools exist for other languages). For me, untested code is buggy code, however clean it may be.
If the test tests complex code and is complex itself I often write tests for my tests (example).
Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing.
The idea of unit testing is to test the most granular things, then stack together tests to prove the larger case. If you're writing large tests, you lose a bit of the benefits there, although it's probably quicker to write larger tests.