Unit-Testing Acceptance-Tests?

I am currently writing some Acceptance-Tests that will help drive the design of a program I am about to write. Everything seems fine except I've realized that the Acceptance-Tests are somewhat complex: although they are conceptually simple, they require quite a bit of tricky code to run. I'll need to write a couple of "helper" classes for my Acceptance-Tests.
My question is how to develop them:
1) Make Unit-Tests of my Acceptance-Tests (this seems odd -- has anyone done anything like it?)
2) Make Unit-Tests for those helper classes. After I have finished the code of those helper classes, I can move on and start working on the real Unit-Tests of my system. When using this approach, where would you put the helper classes? In the tests' project or in the real project? They don't necessarily have dependencies on testing/mocking frameworks.
3) Any other idea?

A friend is very keen on the notion that acceptance tests tell you whether your code is broken, while unit tests tell you where it's broken; these are complementary and valuable bits of information. Acceptance tests are better at letting you know when you're done.
But to get done, you need all the pieces along the way; you need them to be working, and that's what unit tests are great at. Done test-first, they'll also lead you to better design (not just code that works). A good approach is to write a big-picture acceptance test, and say to yourself: "when this is passing, I'm done." Then work to make it pass, working TDD: write a small unit test for the next little bit of functionality you need to make the AT pass; write the code to make it pass; refactor; repeat. As you progress, run the AT from time to time; you will probably find it failing later and later in the test. And, as mentioned above, when it's passing, you're done.
I don't think unit testing the acceptance test itself makes much sense. But unit testing its helper classes - indeed, writing them test-first - is a very good way to go. You're likely to find some methods that you write "just for the test" working their way into the production code - and even if you don't, you still want to know that the code your ATs use is working right.
If your AT is simple enough, the old adage of "the test tests the code, and the code tests the test" is probably sufficient - when you have a failing test, it's either because the code is wrong or because the test is wrong, and it should be easy enough to figure out which. But when the test is complicated, it's good to have its underpinnings well tested, too.
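To make option 2 concrete, here's a rough sketch in C# with NUnit of what a test-first helper class might look like (the Customer and CustomerBuilder names are made up purely for illustration). The helper lives beside the tests and gets its own small unit tests before the acceptance tests start relying on it:

    using NUnit.Framework;

    // Hypothetical production class the acceptance tests exercise.
    public class Customer
    {
        public string Name { get; }
        public bool IsActive { get; }
        public Customer(string name, bool isActive) { Name = name; IsActive = isActive; }
    }

    // Hypothetical helper used by the acceptance tests to build test data.
    public class CustomerBuilder
    {
        private string _name = "default";
        private bool _active = true;

        public CustomerBuilder Named(string name) { _name = name; return this; }
        public CustomerBuilder Inactive() { _active = false; return this; }
        public Customer Build() { return new Customer(_name, _active); }
    }

    // Unit tests for the helper itself, written before the acceptance tests depend on it.
    [TestFixture]
    public class CustomerBuilderTests
    {
        [Test]
        public void Build_WithDefaults_ReturnsActiveCustomer()
        {
            Assert.That(new CustomerBuilder().Build().IsActive, Is.True);
        }

        [Test]
        public void Build_WhenInactive_ReturnsInactiveCustomer()
        {
            Assert.That(new CustomerBuilder().Inactive().Build().IsActive, Is.False);
        }
    }

Whether the builder stays in the test project or migrates into production code later, its own tests tell you the acceptance tests are standing on solid ground.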

If you think of software development like a car production plant, then the act of writing software is like developing a new car. Each component is tested separately because it's new. It's never been done before. (If it has, you can either get people who've done it before or buy it off the shelf.)
Your build system, which builds your software and also which tests it, is like the conveyor belt - the process which churns out car after car after car. Car manufacturers usually consider how they're going to automate production of new components and test their cars as part of creating new ones, and you can bet that they also test the machines which produce those cars.
So, yes, unit-testing your acceptance tests seems perfectly fine to me, especially if it helps you go faster and keep things easier to change.

There is nothing wrong with using a unit test framework (like JUnit) to write acceptance tests (or integration tests). People don't like calling them 'unit tests' for many reasons. To me the main reason is that integration/acceptance tests won't run every time someone checks in a change (they take too long and/or need an environment that isn't available at check-in).
Your helper classes are a fairly standard thing; they make up your "test infrastructure code". They don't belong anywhere else but in the test code. It's your choice whether to test them or not, but without them your tests won't be feasible in big systems.
So, your choice is option 2, or no tests of tests at all. There is nothing wrong with refactoring infrastructure code to make it more transparent and simple.

Your option 2 is the way I'd do it: Write helper classes test-first - it sounds like you know what they should do.
The tests are tests, and although the helper classes are not strictly tests, they won't be referenced by your main code, just the tests, so they belong with the tests. Perhaps they could be in a separate package/namespace from regular tests.

Related

Testing unit tests' helper methods

As I am writing tests, some of them have a lot of logic in them. Most of this logic could easily be unit tested, which would provide a higher level of trust in the tests.
I can see a way to do this, which would be to create a class TestHelpers, to put in /classes, and write tests for TestHelpers along with the regular tests.
I could not find any opinion on such a practice on the web, probably because the keywords to the problem are tricky ("tests for tests").
I am wondering whether this sounds like good practice, whether people have already done this, whether there is any advice on that, whether this points to bad design, or something of the sort.
I am running into this while doing characterization tests. I know there are some frameworks for it, but I am writing it on my own, because it's not that complicated, and it gives me more clarity. Also, I can imagine that one can easily run into the same issue with unit tests.
To give an example, at some point I am testing a function that connects to Twitter's API service and retrieves some data. In order to test that the data is correct, I need to check whether it's a JSON-encoded string, whether the structure matches Twitter's data structure, whether each value has the correct type, etc. The function that does all these checks on the retrieved data would typically be interesting to test on its own.
Any idea or opinion on this practice ?
One of the aphorisms about TDD is that "the tests test the code, and the code tests the tests." That is, because of the red-green-refactor cycle, you see the code fail the test and then after making it work, you see it pass the test - and that alone is enough to give you pretty good confidence that the test (and all the test utility code it calls) works correctly. For characterization tests, you don't have this red-green-refactor cycle, so it may be of value for you to write tests for your test utility methods.
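To make that concrete for the Twitter example above, here's a minimal sketch in C# with NUnit of a test utility that carries enough logic to deserve its own tests (the TweetShape class and the exact fields it checks are hypothetical; a real tweet payload has far more fields):

    using System.Text.Json;
    using NUnit.Framework;

    // Hypothetical test utility: does a string look like a tweet payload?
    public static class TweetShape
    {
        public static bool LooksLikeTweet(string json)
        {
            try
            {
                using var doc = JsonDocument.Parse(json);
                var root = doc.RootElement;
                return root.TryGetProperty("id", out var id)
                    && id.ValueKind == JsonValueKind.Number
                    && root.TryGetProperty("text", out var text)
                    && text.ValueKind == JsonValueKind.String;
            }
            catch (JsonException)
            {
                return false; // not even valid JSON
            }
        }
    }

    // Because the utility carries real logic, it gets a few tests of its own.
    [TestFixture]
    public class TweetShapeTests
    {
        [Test]
        public void LooksLikeTweet_WithIdAndText_ReturnsTrue()
        {
            Assert.That(TweetShape.LooksLikeTweet("{\"id\": 42, \"text\": \"hello\"}"), Is.True);
        }

        [Test]
        public void LooksLikeTweet_WithMissingText_ReturnsFalse()
        {
            Assert.That(TweetShape.LooksLikeTweet("{\"id\": 42}"), Is.False);
        }

        [Test]
        public void LooksLikeTweet_WithInvalidJson_ReturnsFalse()
        {
            Assert.That(TweetShape.LooksLikeTweet("not json"), Is.False);
        }
    }

These tests are trivial to write, and they let you trust the verdicts the characterization tests hand you.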
I think it's too much to test the tests themselves. alfasin is right: who would test the tests for the tests then? And it is no coincidence that you can't find much info on this topic - it's just not a common practice. Usually, well-written tests should cover the arrangement logic within the tests themselves. But I understand your aspiration - how can you be sure that a test is "well-written"? The most dangerous thing here is a test that passes when it should fail (because of a bug in the test itself). Having such a test is even worse than having no test at all. To be honest, I haven't run into many such cases in practice. My advice is just to focus on writing good tests that cover all execution paths of your logic, and I think you'll be fine :)
While it sounds perverse, I have on occasion written automated tests for some of my testing infrastructure, if it gets fairly complex. The tests that test this testing infrastructure then tend to be simple, so the "what about testing the tests for the test?" question becomes moot, in my experience.
Note that this is mainly occurring in a library designed explicitly to aid testing for other people (people writing Qt apps, in this case), though I have done it for some stand-alone apps before: for example, when writing tests for Kate's Vim mode's auto-completion integration, the fake auto-completer test-helper code used for mimicking the auto-completion for a variety of configurations got complex enough that I actually started developing it test-first.
And it's probably worth mentioning that e.g. Google Mock has hundreds of tests written for it :)

What is unit testing, and does it require code being written?

I've joined a new team, and I've had a problem understanding how they are doing unit tests. When I asked where the unit tests are written, they explained they don't do their unit tests that way.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected. To me, this is integration testing and just testing your code locally.
I was under the impression that unit tests are code written to verify behavior in a small section of code. For example, you may write a unit test to make sure a method returns the right value, or makes the appropriate calls to the database, using a framework like NUnit or MbUnit to help you with your assertions.
Unit testing to me is supposed to be fast and quick. To me, you want these so you can automate it, and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
Can someone provide clarification in my or their misunderstandings?
I have worked places that did testing that way and called it unit testing. It reminded me of a quote attributed to Abe Lincoln:
Lincoln: How many legs does a dog have?
Other Guy: 4.
Lincoln: What if we called the tail a leg?
Other Guy: Well, then it would have 5.
Lincoln: No, the answer is still 4. Calling a tail a leg doesn't make it so.
They explained that what they're calling unit tests is when they
actually check the code they wrote locally, and that all of the points
are being connected.
That is not a unit test. That is a code review. Code reviews are good, but without actual unit tests things will break.
Unit tests involve writing code. Specifically, a unit test operates on one unit, which is just a class or component of your software.
If a class under test depends on another class, and you test both classes together, you have an integration test. Integration tests are good. Depending on the language/framework you might use the same testing framework (e.g. junit for java) for both unit and integration tests. If you have a dependency but mock or stub that dependency, then you have a pure unit test.
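As a rough illustration of that distinction in C# with NUnit, using a hand-rolled stub rather than a mocking library (every name here is invented for the example):

    using NUnit.Framework;

    public interface IExchangeRates                // the dependency
    {
        decimal GetRate(string from, string to);
    }

    public class PriceConverter                    // the unit under test
    {
        private readonly IExchangeRates _rates;
        public PriceConverter(IExchangeRates rates) { _rates = rates; }

        public decimal Convert(decimal amount, string from, string to)
        {
            return amount * _rates.GetRate(from, to);
        }
    }

    // Stubbing the dependency keeps this a pure unit test.
    public class FixedRates : IExchangeRates
    {
        public decimal GetRate(string from, string to) { return 2m; }
    }

    [TestFixture]
    public class PriceConverterTests
    {
        [Test]
        public void Convert_UsesRateFromDependency()
        {
            var converter = new PriceConverter(new FixedRates());
            Assert.That(converter.Convert(10m, "USD", "EUR"), Is.EqualTo(20m));
        }
    }

Swap FixedRates for a real implementation backed by a web service or database and the very same test becomes an integration test.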
Unit testing to me is supposed to be fast and quick. To me, you want
these so you can automate it, and have a huge suite of tests for your
application to make sure that it behaves AS YOU EXPECT.
That is essentially correct. How 'fast and quick' developing unit tests is depends on the complexity of what is being tested and the skill of the developer writing the test. You definitely want to build up a suite of tests over time, so you know when something breaks as a codebase becomes more complex. That is how testing makes your codebase more maintainable, by telling you what ceases to function as you make changes.
Your team-mates are not doing unit testing. They are doing "fly by the seat of your pants" development.
Your assumptions are correct.
Doing a project without unit-tests (as they do, don't be fooled) might seem nice for the first few weeks: less code to write, less architecture to think about, less problems to worry about. And you can see the code is working correctly, right?
But as soon as someone (someone else, or even the original coder) comes back to an existing piece of code to modify it, add a feature, or simply understand how it works and what exactly it does, things will become a lot more problematic. And before you realize it, you'll be spending your nights browsing through log files and debugging what seemed like a small feature, just because it needs to integrate with other code that nobody knows exactly how it works. And you'll hate your job.
If it's not worth testing (with actual unit tests), then it's not worth writing the code in the first place. Everyone who has tried coding both without and with unit tests knows that. Please, please, make them change their mind. Every time a piece of untested code is checked in somewhere, a puppy dies horribly.
Also, I should say, it's a lot (A LOT) harder to add tests later to a project that was built without testing in mind than to build the test and production code side by side from the very start. Testing not only helps you make sure your code works; it improves your code quality by forcing you to make good decisions (e.g. coding to interfaces, loose coupling, inversion of control, etc.)
"Unit testing" != "unit tests".
Writing unit tests is one specific method of performing unit testing. It is a very good one, and if your unit tests are written well, it can give you good value over a long time. But what they're doing is indeed unit testing. It's just the kind of unit testing that doesn't help you at all the next time you need to work on the same code. And that's wasteful.
To add my two cents: yes, that is indeed not unit testing. IMHO, the main features of unit tests are that they should be fast, automated and isolated. You can use a mocking framework such as RhinoMocks to isolate external dependencies.
Unit tests also have to be very simple and short. Ideally no more than a screen length. It is also one of the few places in software engineering where copy and pasting code might be a better solution than creating highly reusable and highly abstract functions. The reason simplicity is given such a high priority is to avoid the "Who watches the Watchers" problem. You really don't want to be in a situation where you have complex bugs in your unit tests, because they themselves aren't being tested. Here you are relying on the extreme simplicity and tiny size of the tests to avoid bugs.
The names of the unit tests also should be very descriptive, again following the simplicity and self documenting paradigm. I should be able to read the name of the test method and know exactly what it is doing. A quick glance at the code should show me exactly what functionality is being tested and if any external dependencies are being mocked.
The descriptive test names also make you think about the application as a whole. If I look at the entire test run, ideally just by looking at the names of all the tests that were run, I should have a fairly good idea of what the application does.

Are unit tests useful for real? [duplicate]

Possible Duplicate:
Is Unit Testing worth the effort?
I know what the purpose of unit tests is, generally speaking, but I've found some things that bother me.
1)
The purpose of tests is early bug discovery. So, in some later iteration, if I make some changes in the code, the automated tests have to alarm me and tell me that I've screwed up some long-ago-forgotten piece of my software.
But say that I have class A, and say that it interacts with an instance of some other class, call it class B.
When one writes a unit test for class A, one has to mock class B.
So, at some point in the future, if someone makes changes in class B that cause bugs, those bugs will show up only in class B's unit tests, not in A's (because A's test doesn't use the real class B, but a mock of it with fixed inputs and outputs).
So I can't see how a unit test can give early notification of bugs that one isn't aware of. I'm aware of possible bugs in the class that I'm changing; I do not need a unit test for that. I need a test to alarm me about consequences of my changes that introduce a bug in some "forgotten" class, and that isn't possible with unit tests. Or am I wrong?
2)
When writing mocks and the right expectations about method calls, their inputs and return values, one has to know how the class under test is going to be implemented.
And I think that contradicts test-driven development. In TDD one writes tests first, and, driven by them, one writes the code.
But I can't write the right expectations (and tests in general) unless I write the code that has to be tested. That contradicts TDD, right?
Both of those problems could be solved if I used real objects instead of mocks. But that isn't unit testing then, right? In a unit test the class under test has to be isolated from the rest of the system, using mocks instead of real classes.
I'm sure that I'm wrong somewhere, but I cannot find where. I've been reading and reading and I can't find what I have misunderstood.
I have just implemented TDD and unit testing for the first time. I think it's great, and I haven't even used it in a continuous integration environment yet, where I imagine it's even more valuable.
My process:
Create the skeleton of the class where the business logic will go (be it a service layer, a domain object, etc.).
Write the test, and define the method names and required functionality in the test - which then prompts my IDE to write the skeleton method (a small but nice plus).
Then write the actual class methods being tested.
Run tests, debug.
I can now no longer write code without first writing its test; it really does aid development. You catch bugs straight away, and it really helps you mentally order your code and development in a structured, simple manner. And of course, any code that breaks existing functionality is caught straight away.
I have not needed to use mock objects yet (using spring I can get away with calling service layer, and controller methods directly).
Strictly speaking, you spend more time writing unit tests than concentrating on the actual code.
There is no way you can achieve 100% code coverage. I do unit test my application, but I feel it's not worth it - though not everyone agrees with that.
If you change a piece of code, your unit tests fail and you have to figure out why they're failing; for me it's not worth it.
I am not a big fan of unit testing... You mock everything, but mocking is a big task.
Nowadays, though, companies are moving towards the TDD approach.
Simple answer: Yes, they are useful for real.
TDD is nice and all in theory, but I've hardly ever seen it done in practice. Unit tests are not bound to TDD, however. You can always write unit tests for code that works, to make sure it still works after changes. This use of unit tests has saved me countless hours of bug fixing, because the tests immediately pointed out what went wrong.
I tend to avoid mocks. In most cases, they're not worth it in my opinion. Not using mocks might make my unit test a little more of an integration test, but it gets the job done: making sure your classes work as designed.
Unit tests themselves aren't stupid; it depends on your project.
I think unit tests are good, not just in TDD development but in any kind of project. The real pro for me is that if you wrap the crucial code in your project with unit tests, there's a way to know whether what you meant it to do is still working or not.
For me, the real problem is that if someone is messing with your code, there should be a way for them to know whether the tests still succeed without having to run them manually. For instance, let's take an Eclipse + Java + Maven example. If you use unit tests in your Maven Java project, each time anyone builds it, the tests run automatically. So, if someone messed up your code, the next time they build it, they'll get a "BUILD FAILED" console error, pointing out that some unit tests failed.
So my point is: unit tests are good, but people should find out that they're screwing things up without having to run the tests themselves each time they change code.
0) No, it doesn't. Sound stupid, I mean. It is a perfectly valid question.
The sad fact is that ours is an industry riddled with silver bullets and slick with snake oil, and if something doesn't make sense then you are right to question it. Any technique, any approach to any part of engineering is only ever applicable in certain circumstances. These are typically poorly documented at best, and so it all tends to deteriorate into utterly pointless my-method-is-better-than-yours arguments.
I have had successes and failures with unit testing, and in my experience it can be very useful indeed, when correctly applied.
It is not a good technique when you're doing a massive from-scratch development effort, because everything is changing too rapidly for the unit tests to keep up. It is an excellent technique when in a maintenance phase, because you're changing a small part of a larger whole which still needs to work the way it's supposed to.
To me, unit testing is not about test-driven development but design-by-contract: the unit test checks that the interface has not been violated. From this, it's pretty easy to see that the same person should not be writing a unit and its corresponding unit test, so for very small-scale stuff, like my own pet projects, I never use it. In larger projects, I advocate it - but only if design-by-contract is employed and the project recognizes interface specifications as important artefacts.
Unit testing as a technique is also sensitive to the unit size. It is not necessarily the case that a class is testable on its own, and in my experience the class is generally too small for unit testing. You don't want to test a class per se, you want to test a unit of functionality. Individually deployable (installable) binaries, eg jars or DLLs, make better candidates in my book, or packages if you're in a Java-style language.

TDD: test first anyway?

Are you doing test-first anyway? Or do you, in some cases, do some coding and then write your tests to make sure the code works? As for me, I prefer to create the class first. Sure, during class creation I think about its interface and how to test the class. But I don't write the testing code first. Do you write it first? Do you think you should always write test code first?
I'm not a purist in this matter (TDD involves more than just writing the tests first, it's also about initially writing very minimal, "hard coded" tests and refactoring them a lot -- see The Book by The Master himself).
I tend to test-first when I'm doing incremental development to add a feature to an existing module, and I insist on test-first when the incremental development I'm doing is to fix a bug (in the latter case I absolutely want a unit-test AND an integration-test that both reproduce the bug, before I fix the code that caused the bug).
I tend to be laxer when I'm doing "greenfield" development, especially if that's of an exploratory, "let's see what we can do here that's useful", nature -- which does happen, e.g. in data mining and the like -- you have a somewhat vague idea that there might be a useful signal buried in the data, some hypothesis about its possible nature and smart ways to [maybe] extract it -- the tests won't help until the exploration has progressed quite a bit.
And, once I start feeling happy with what I've got, and thus start writing tests, I don't necessarily have to redo the "exploratory" code from scratch (as I keep it clean and usable as I go, not too hard to do especially in Python, but also in R and other flexible languages).
Test-driven development, by definition, is writing your tests first. If you create your class first, the subsequent tests you write can be called Unit Tests, but it is not TDD.
There are many who say that writing your tests first improves code quality. I am inclined to agree, provided there is some effort put into the software design beyond just writing tests and making them pass.
If you are refactoring an existing legacy system, it is a good idea to wrap the functionality of that system in a suite of tests prior to refactoring. That way, you know if your code changes break something.
In my humble opinion, Test Driven Development is something more than writing unit tests first. Test first TDD is more about putting yourself in the mindset of thinking about what exactly it is you are trying to achieve with the code you are about to write.
What should your acceptance criteria be?
When will your code be 'done' and how is done defined?
Writing unit tests before code is only one of the ways to formalize such acceptance criteria. The biggest benefit in my view of Test first TDD is that you give a good hard think and document (in unit tests, on paper, on the white board) what the acceptance criteria are for the feature/story you are implementing. Having such documentation also helps define the scope of done.
So, whether you code a function first and then write the unit test for it or you decide to first write the test and then code the function is of secondary nature. Once you've thought out and documented your acceptance criteria, your benefit will be that your target is clearer and you're more likely to focus on fulfilling the acceptance criteria minimizing any 'noise' (e.g. feature x would be nice to add, am I 'done' yet).
Of course this does not mean that we go ahead and write code leaving it untested and attempt to retrofit unit tests when we've already coded 4 classes for example! I just think that there is no need to be 100% dogmatic in writing unit tests before actual code as the benefit in TDD lies elsewhere, but in any case our unit tests should keep up with our growing code base.
Let's be clear here: TDD was designed for production code. If you want to follow the rules of TDD for production code, then the first thing you write is your first test.
I write my first test (before writing a line of production code), which may want to instantiate a new class and running it produces some sort of error, which I fix by writing the smallest amount of production code to get the test running.
If you want to do something else, then you can make up your own rules: You can write whatever code and/or tests you like to explore any designs or patterns or anything.
However, if you then want to write production code based on what you have learned from your experiments, you may well be better placed to write your first test - though technically you should really throw away your experiments first.

How to start unit testing or TDD?

I read a lot of posts that convinced me I should start writing unit tests. I also started to use dependency injection (Unity) for the sake of easier mocking, but I'm still not quite sure at what stage I should start writing the unit tests and mocks, or how and where to start.
Would the preferred way be writing the unit tests before the methods as described in the TDD methodology?
Is there any different methodology or manner for unit testing?
Test first / test after:
It should be noted that 'test first' as part of TDD is just as much (if not more) to do with design as it is to do with unit testing. It's a software development technique in its own right -- writing the tests results in a constant refining of the design.
On a separate note: If there is one significant advantage to TDD from a pure unit testing perspective, it is that it is much harder (though not impossible) to write a test that's wrong when doing TDD. If you write the test beforehand, it should always fail because the logic required to make the test pass does not yet exist. If you write the test afterwards, the logic should be there, but if the test is bugged or is testing the wrong thing, it may pass regardless.
I.e. if you write a bad test before, you may get a green light when you expect a red (so you know the test is bad). If you write a bad test afterwards, you will get a green light when you expected a green (unaware of the bad test).
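A contrived sketch of that last point in C# with NUnit (the Discount class and the 10% rule are made up for illustration):

    using NUnit.Framework;

    public static class Discount
    {
        // Buggy production code: it is supposed to take 10% off, but doesn't.
        public static decimal Apply(decimal amount) { return amount; }
    }

    [TestFixture]
    public class DiscountTests
    {
        // A bad test written after the code: it asserts against whatever the code
        // happens to return, so it passes and the bug goes unnoticed.
        [Test]
        public void Apply_TakesTenPercentOff_WrittenAfterwards()
        {
            Assert.That(Discount.Apply(100m), Is.EqualTo(100m));
        }

        // Written test-first, the expected value comes from the requirement,
        // so the test stays red until Apply is implemented correctly.
        [Test]
        public void Apply_TakesTenPercentOff_WrittenFirst()
        {
            Assert.That(Discount.Apply(100m), Is.EqualTo(90m));
        }
    }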
Books
The pragmatic unit testing book is well worth a look, as is Roy Osherove's "The Art of Unit Testing". The pragmatic book is more narrowly focussed on the different types of test inputs you can try to find bugs, whereas TAOUT covers a wider spread of topics such as test doubles, strategies, maintainability etc. Either book is good; it depends what you want from it.
Also, here's a link to a talk Roy Osherove did on unit testing. It's worth a watch (so are some of the test review videos he recorded, as he points out various problems and dos/don'ts along with reasons why).
How to start
There's nothing better than writing code. Find a fairly simple class that doesn't reference much else. Then, start writing some tests.
Always ask yourself "what do I want to try and prove with this test?" before you write it, then give it a decent name (usually involving the method being called, the scenario and the expected result, e.g. on a stack: "Pop WhenStackIsEmpty ThrowsException").
Think of all the inputs you can throw at it, different combinations of methods that may yield interesting results and so forth.
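For instance, a first couple of tests for a stack along those lines might look like this in C# with NUnit (using the built-in Stack<int> just to keep the sketch self-contained):

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class StackTests
    {
        [Test]
        public void Pop_WhenStackIsEmpty_ThrowsException()
        {
            var stack = new Stack<int>();
            Assert.Throws<InvalidOperationException>(() => stack.Pop());
        }

        [Test]
        public void Pop_AfterPushingOneItem_ReturnsThatItem()
        {
            var stack = new Stack<int>();
            stack.Push(42);
            Assert.That(stack.Pop(), Is.EqualTo(42));
        }
    }

The name alone tells you the scenario and the expected result before you even read the body.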
If you are curious about unit testing the best way to learn it is try it. You will probably start writing integration tests at first, but that is fine. When they seem too difficult to maintain or too much work to write, read more about what kind of tests there are (unit, functional, integration) and try to learn the difference.
If you are interested in starting with TDD, Uncle Bob is a good source. Particularly this.
More on unit testing
Ensure that you get consistent test results. Running the same test repeatedly should return the same results consistently.
The tests should not require configuration.
Testing order should not matter. This means partial test runs can work correctly. Also, if you keep this design philosophy in mind, it will likely aid in your test creation.
Remember that any testing is better than no testing. The difficulty can be found in writing good clean unit tests that promote ease of creation and maintenance. The more difficult the testing framework, the more opposition there will be to employing it. Testing is your friend.
In C# and with Visual Studio I find the following procedure very helpful:
Think! Make a small upfront design. You need to have a clear picture of what classes you need and how objects should relate to each other. Concentrate on only one class/object (the class/object you want to implement) and one relationship; otherwise you end up with too heavyweight a design. I often end up with multiple sketches (only a few boxes and arrows) on a spare sheet of paper.
Create the class in the production code and name it appropriately.
Pick one behaviour of the class you want to implement and create a method stub for it. With visual studio creating empty method stubs is a piece of cake.
Write a test for it. To do that you will need to initialize the object, call the method and make an assertion to verify the result. In the easiest case the assertion requires another method stub or a property in the production code.
Compile and let the test runner show you the red bar!
Code the required behavior in the production code to see the green bar appear.
Move to the next behaviour.
For this procedure two things are very important:
You need a clear picture of what you want and how the class/object should look. At least spend some time on it.
Think about the behaviours of the class/object. This will make the tests and the production code behaviour-centric, which is a very natural way to think about classes/objects.
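A minimal sketch of the stub-test-red-green steps above in C# with NUnit (Account, Deposit and the amounts are hypothetical names chosen just for the example):

    using NUnit.Framework;

    // The class and an empty method stub in the production code.
    public class Account
    {
        public decimal Balance { get; private set; }

        public void Deposit(decimal amount)
        {
            throw new System.NotImplementedException(); // stub: keeps the bar red for now
        }
    }

    // The test: initialize the object, call the method, assert on the result.
    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Deposit_IncreasesBalanceByAmount()
        {
            var account = new Account();
            account.Deposit(50m);
            Assert.That(account.Balance, Is.EqualTo(50m));
        }
    }

    // Compile, run, watch the red bar; then replace the stub body with
    // "Balance += amount;" to see the green bar, and move to the next behaviour.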
Test first or not test first?
I usually find it harder to retrofit tests onto existing production code. In most cases this is due to tangled dependencies on other objects that show up when the object under test needs to be initialized. TDD usually avoids this, because you want to initialize the object in the test cases with as little effort as possible. This results in very loose coupling.
When I retrofit tests, the most cumbersome job is initializing the object under test. The assertions may also be a lot of work because of tangled dependencies. For this you will need to change the production code and break dependencies. By using dependency injection properly, this should not be an issue.
Yes, the preferred way of doing TDD is to write the test first (as implied by the name Test-Driven Development). When you start out with TDD it can be hard to know where to start writing the tests, so I would suggest being pragmatic about it. After all, the most important part of our job is delivering working code, not so much how the code was crafted.
So you can start by writing tests for existing code. Once you get the hang of how the unit tests are structured, which ones seem to do a good job and which ones don't, you will find it easier to dive further into the test-first approach. I have found that I write tests first to a greater extent as time goes by. It simply becomes more natural with increased experience.
Steve Sanderson has a great writeup on TDD best practices.
http://feeds.codeville.net/~r/SteveCodeville/~3/DWmOM3O0M2s/
Also, there's a great set of tutorials for doing an ASP.NET MVC project that discusses a lot of TDD principles (if you don't mind learning ASP.NET MVC along the way).
http://www.asp.net/learn/mvc-videos/ Look for the "Storefront" series at the bottom of the page.
Moq seems to be the hot mocking framework lately; you may want to look into that as well.
In summary, try to write a test to validate something you're trying to achieve, then implement the code to make it work.
The pattern is known as Red - Green - Refactor. Also do your best to minimize dependencies so that your tests can focus on one component.
Personally, I use Visual Studio Unit Tests. I'm not a hardcore TDD developer, but what I like to do is this:
Create a new project and define a few of the fundamental classes based on the system design (that way I can at least get some IntelliSense).
Create a unit test project and start writing unit tests to satisfy the functionality I'm trying to implement.
Make them fail.
Make them pass (implement).
Refactor.
Repeat, trying to make the tests more stringent or creating more tests, until I feel it's solid.
I also feel it's very useful for adding functionality onto an existing code base. If you want to add some new feature, first create the unit test for what you want to add, step through the code to see what you have to change, then go through the TDD process.
Choose a small non-critical application and implement it using TDD. At first the new way of thinking will feel weird, but after maybe a week or two of practice it will feel natural.
Here is a tutorial application (in branch "tutorial") that shows what kinds of tests to write. In that tutorial you write code to pass the predefined test cases, so that you get into the rhythm, and later you then write your own tests. The README file contains instructions. It's written in Java, but you can easily adapt it to C#.
I would take on TDD, test-first development, before mocks and dependency injection. To be sure, mocks can help you better isolate your units - and thus do better unit testing - but to my mind, mocking and DI are more advanced concepts that can interfere with the clarity of just writing your tests first.
Mocks, and DI, have their place; they're good tools to have in your toolbox. But they take some sophistication, a more advanced understanding, than the typical testing neophyte has. Writing your tests first, however, is exactly as simple as it sounds. So it's easier to take on, and it's powerful all by itself (without mocks and DI). You'll get earlier, easier wins by writing mock-free tests first, than by trying to begin with mocks, and TDD, and DI all at once.
Start with test-first; when you are very comfortable with it, and when your code is telling you you need mocks, then take on mocks.
I have worked for companies which take unit testing/integration testing too far and those that do too little so I like to think I have a good balance between the two.
I would recommend TDD - Test Driven Development. This ensures you have good coverage but it also keeps focusing your attention on the right place and problem.
So the first thing you do for every piece of new development is write a unit test - even if you don't have a single class to test.
Think about what you are testing. Now run the test. Why doesn't it compile? Because you need classA. Create the class and run the test. Why doesn't it compile? Because it doesn't have methodA. Write the method and run the unit test again. Why does the test fail? Because methodA isn't implemented. Implement methodA and run the test. Why does it fail? Because methodA doesn't return the correct expected value... etc.
You continue like this writing unit tests as you develop and then eventually the test will pass and the piece of functionality will be complete.
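In C# with NUnit that back-and-forth might look like this (classA, methodA and the value 42 are placeholders, not a real design):

    using NUnit.Framework;

    // Written first: at this point ClassA does not exist yet, so this won't even compile.
    [TestFixture]
    public class ClassATests
    {
        [Test]
        public void MethodA_ReturnsExpectedValue()
        {
            var a = new ClassA();                      // compile error until ClassA is created
            Assert.That(a.MethodA(), Is.EqualTo(42));  // compile error until MethodA exists,
                                                       // then red until it returns the right value
        }
    }

    // Grown one failure at a time, writing only enough to answer the current complaint:
    public class ClassA
    {
        public int MethodA()
        {
            return 42; // started out as "throw new System.NotImplementedException();"
        }
    }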
Extending on Steve Freeman's answer: Dave Astels' book is called "Test-Driven Development: A Practical Guide". If the kind of application you're writing is a GUI application then this should be helpful. I read Kent Beck's book but I couldn't figure out how to start a project with TDD. Astels' book test-drives a complete, non-trivial GUI application from start to finish using stories. It helped me a lot to actually get started with TDD; it showed me where and how to start.
Test-driven development can be confusing for beginners. A lot of the books I read when I was learning TDD teach you how to write unit tests for a Calculator class, but there seems to be very little help for building real-world apps, which are more data-centric, if I dare say. For me the breakthrough was when I understood what Behaviour Driven Development (BDD) is and how to start testing from the outside in. Now I can simply advise you to focus on your application's behaviour and write unit tests to verify it. There is a lot of debate going on between TDD and BDD, but I think that well-written automated tests at every level add value, and to write them we need to focus on behaviour.
Hadi Hariri has an excellent post here
http://hadihariri.com/2012/04/11/what-bdd-has-taught-me/
I have also written some articles on the topic that I feel will help in understanding all the concepts related to TDD here
http://codecooked.com/introduction-to-unit-testing-and-test-driven-development/
http://codecooked.com/different-types-of-tests-and-which-ones-to-use/
Read Pragmatic Unit Testing in C# with NUnit. It has comprehensive information about getting started writing tests and structuring your code to make it more unit-testing friendly.
If you haven't written unit tests before, then just pick some classes and begin to write your unit tests, and continue to work on developing more unit tests.
As you gain experience you can then begin to mock out the database for example, by using the Unity framework, but, I would suggest starting simply and gaining experience before making this leap.
Once you are comfortable with how to write unit tests, then you can try doing TDD.
I prefer Kent Beck's approach, which is nicely explained in his book, Test-Driven Development: By Example.
From your question I can infer that you are not sure about the test framework - choosing the correct test framework is very important for TDD, or for writing unit tests in general.
The only practical problem with TDD is that refactoring (we need to refactor the test code as well) takes a lot of time.
I think Dave Astels' book is still one of the best introductions. It's for Java, but you should be able to translate.