Is it wrong to favour integration testing over unit testing?

We're running a project on which we started adopting test-driven design long after development had begun.
We have both unit tests and integration tests. Integration tests are run on a real database, initialized in a known state before the tests are run.
As we write tests, we're noticing that even for classes that could be tested in the "standard way", in isolation and with mock objects, it has actually become faster and cleaner (read: shorter and easier-to-understand code) to just use real objects/services talking directly to the database, rather than cluttering the test class with complicated mock-object setup.
Is there anything wrong with this approach?

Nothing wrong with it, at all. On the contrary, based on my experience I would say that it's wrong to favour unit testing. The perception your team had, that such tests "become faster, and cleaner", is the same one I formed many years ago and have held ever since.
I would even suggest that you drop unit tests altogether and continue investing in integration tests. I say this as someone who develops a testing library with a very sophisticated mocking API, who gave isolated unit tests the benefit of the doubt for years, and who, after having written many tests of both kinds, finally came to the conclusion that integration tests are so much better.
One correction, though: unit tests "in isolation & with mock objects" are not the "standard way". As you probably know, Kent Beck is the "father" of TDD and the inventor of JUnit. But guess what: Kent did not use mocks at all in his TDD unit tests. Strictly speaking, the "unit" tests written by the guy who invented TDD are closer to integration tests, as they make no effort to isolate the tested unit from its dependencies. (This is a common misunderstanding about unit testing. For an accurate definition, see this article by Martin Fowler.)
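To make the difference concrete, here is a minimal sketch of a test in that classic, mock-free style (JUnit 5 assumed; the Order and InMemoryCatalog classes are hypothetical and exist only for illustration). The class under test is exercised together with a real collaborator, and no mocking library is involved:

```java
import org.junit.jupiter.api.Test;
import java.util.HashMap;
import java.util.Map;
import static org.junit.jupiter.api.Assertions.assertEquals;

class OrderTest {

    // Tiny hypothetical collaborator: a real, in-memory catalog rather than a mock.
    static class InMemoryCatalog {
        private final Map<String, Integer> pricesInCents = new HashMap<>();
        void addProduct(String name, int priceInCents) { pricesInCents.put(name, priceInCents); }
        int priceOf(String name) { return pricesInCents.get(name); }
    }

    // Tiny hypothetical system under test.
    static class Order {
        private final InMemoryCatalog catalog;
        private int totalInCents;
        Order(InMemoryCatalog catalog) { this.catalog = catalog; }
        void addLine(String product, int quantity) { totalInCents += catalog.priceOf(product) * quantity; }
        int totalInCents() { return totalInCents; }
    }

    @Test
    void totalIsSumOfLinePrices() {
        InMemoryCatalog catalog = new InMemoryCatalog();
        catalog.addProduct("book", 1200);   // prices in cents
        catalog.addProduct("pen", 300);

        Order order = new Order(catalog);
        order.addLine("book", 1);
        order.addLine("pen", 2);

        assertEquals(1800, order.totalInCents());   // 1200 + 2 * 300
    }
}
```

Whether you call this a unit test or a small integration test is largely a matter of vocabulary; the point is that nothing here is isolated behind a mock.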

If you are using real objects/services, then you don't know why your tests failed. For example, you changed something in a service implementation, renamed a database field, or your network is down, and suddenly you have 50 failing "unit" tests. Also, you can't do test-driven development in this case, because the dependencies required by the class you are testing would have to be implemented before you write the test. Can you foresee which API will be required by the consumers of your service? Guessing tends to lead to a less usable API and to code that is never used at all (parameters, methods, etc.).
If your tests take a lot of effort to arrange mocks, then:
tests should be short and simple
don't verify what other tests have already verified
dependencies should be easy to interact with
try to follow the Single Responsibility Principle for your classes (both dependencies and the SUT)
try to follow the Tell, Don't Ask principle - instead of asking the dependency many questions, tell it what you want (a short sketch follows this list)
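As a rough illustration of the last two points (Mockito assumed; all class names are hypothetical): when the SUT tells its dependency what to do instead of asking it many questions, the test needs a single trivial mock and one verification rather than a long chain of stubbed getters:

```java
import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.*;

class NotifierTest {

    interface MailGateway {                 // hypothetical dependency
        void send(String to, String body);
    }

    static class Notifier {                 // hypothetical SUT following Tell, Don't Ask
        private final MailGateway gateway;
        Notifier(MailGateway gateway) { this.gateway = gateway; }
        void greet(String address) { gateway.send(address, "Welcome!"); }
    }

    @Test
    void greetingIsSentThroughTheGateway() {
        MailGateway gateway = mock(MailGateway.class);   // single, trivial mock

        new Notifier(gateway).greet("a@example.com");

        // One interaction to verify; no when(...).thenReturn(...) chains needed,
        // because the SUT tells the dependency what it wants instead of querying it.
        verify(gateway).send("a@example.com", "Welcome!");
    }
}
```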

Related

Do I need NUnit now that I've migrated all my unit tests to MSpec?

I was doing TDD using NUnit. I was naming my NUnit tests in a behavioral style (like given, when, then). However, I am now using MSpec for all my unit tests. I'm still writing tests first, using mocks, etc., so they're still unit tests. But I don't see a need for NUnit.
I am nervous to throw away all the effort I put into learning NUnit. Should I abandon TDD/NUnit completely, taking into consideration that BDD is TDD done right?
Now that you have embraced BDD you are following an "Outside-In" development approach.
A nice succinct definition of this development technique can be found at programmers.stackexchange.com. I quote:
"Outside-In (London school, top-down or "mockist TDD" as Martin Fowler
would call it): you know about the interactions and collaborators
upfront (especially those at top levels) and start there (top level),
mocking necessary dependencies. With every finished component, you
move to the previously mocked collaborators and start with TDD again
there, creating actual implementations (which, even though used, were
not needed before thanks to abstractions). Note that outside-in
approach goes well with YAGNI principle."
When using BDD, you develop in a top-down manner and mock dependencies to satisfy your test. Once your BDD test passes you then revert to using TDD to implement concrete versions of the dependencies you encountered during your BDD test (using an "Inside-Out" approach).
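A rough sketch of that Outside-In flow (Mockito assumed; all names are hypothetical): the top-level GreetingService is test-driven first with its collaborator mocked, and the real UserDirectory implementation is only written later, in its own TDD cycle:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

class GreetingServiceTest {

    interface UserDirectory {                   // collaborator known up front, mocked for now
        String nameFor(String userId);
    }

    static class GreetingService {              // top-level component driven first
        private final UserDirectory directory;
        GreetingService(UserDirectory directory) { this.directory = directory; }
        String greet(String userId) { return "Hello, " + directory.nameFor(userId) + "!"; }
    }

    @Test
    void greetsTheUserByName() {
        // given: the mocked collaborator answers the way the top-level design expects
        UserDirectory directory = mock(UserDirectory.class);
        when(directory.nameFor("42")).thenReturn("Ada");

        // when
        String greeting = new GreetingService(directory).greet("42");

        // then
        assertEquals("Hello, Ada!", greeting);
    }
}
```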
Hence both your TDD and BDD tests are valuable, as they test different aspects of your code i.e. the BDD tests ensure that a user's interaction is tested against all of the layers in your system, whilst the TDD tests cover the individual components in detail and in isolation (via mocking).
So don't abandon your NUnit tests!
To end my answer, you say that:
BDD is TDD done right
As I've explained above, the major difference between BDD and TDD is the scope of code which they cover. Dan North has a good article on this here.
NUnit and MSpec are, basically, test frameworks. They can both be used to write unit, integration, or acceptance tests. You implement the test at the right intersection of layers, behaviors, or whatever your definition is. Both frameworks support BDD-style structure and naming. MSpec does it up front with its custom delegates. NUnit makes it a little more challenging (you have to fiddle with constructors and setup & test methods).
You're still writing tests first (TDD), but now you're using a test framework that directly supports context/specification-grammar and behavioral testing (BDD) vs. object-structure testing.
The question isn't really about TDD vs. BDD, Arrange-Act-Assert grammar vs. context/specification grammar, or any of the other structural differences in the test frameworks (one setup per context, one assertion per spec, etc.), but about your skills with a particular framework!
I say, embrace your new knowledge! Do you like mspec? Are you likely to engage your colleagues to switch to mspec? Will you completely forget your NUnit skills (the API or the command-line runner)?
If you inherit some old projects or have team-members who like NUnit, the two frameworks can exist side-by-side in your solution and in your build script with little trouble. It's just not great to have many different ways to write tests and report results.
From my experience there are some cases where NUnit is still a good choice. For example, MSpec currently does not support examples, whereas NUnit has TestCase and TestCaseSource. These are useful in unit-testing scenarios, so there might still be a use for xUnit-style tools. No need to "forget" anything; I think it's good to be aware of all the tools in your toolbelt and choose the right one for the task at hand.

Is test driven development a form of unit testing?

Our company is in the process of improving the code quality and the processes to adopt when delivering a piece of code. My question concerns unit testing, and I wanted to gather information on the processes you adopt when you are asked to implement a piece of functionality.
Is TDD a form of unit testing? From what I understand in TDD, you write your test first (which fails), write your code and then run your test, which should pass. It may be that the code will make external method calls. But how are we supposed to know about the stubbing required when we are writing our test first?
When you are building your application prior to release, what kinds of tests do you include in the build? Does the build run your integration tests or only your unit tests?
Apart from TDD, do you write any other kinds of tests? Sorry if the questions are slightly disjointed. Your experience of how you undertake development is highly appreciated. Thanks.
TDD can be a whole lot more than Unit Testing - so I'd say that Unit Testing is just a part of TDD. The methodology as a whole, I think, can include creating tests (expressing an expectation or requirement of correct behaviour) for the result of any process in software development. Be that writing code, build scripts, deployment scripts, database scripts, data import/export/transformation... whatever you need to do, you should ask yourself, "How can I prove this has worked? Can I automate a test for that?"
As an example: something that is often overlooked because it falls outside the scope of unit testing, but is a very valid test and one that is important to front-load in the development process, is deployment.
If a piece of software cannot be easily deployed to the production environment without significant effort and change (to the software or the environment architecture), it is important to know this up front, rather than a week before it has to go live. Once you have that process nailed, wouldn't it be nice to have a way of testing to make sure that it was correctly deployed?
When you understand that process - why not script and automate it? If you know the requirement is that it must be deployed, why not write a test for that before even doing it?
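For instance, a minimal sketch of such a deployment check might be a smoke test that asserts a freshly deployed instance answers on a health endpoint (JUnit 5 and Java's built-in HttpClient assumed; the URL and the existence of a /health endpoint are assumptions made purely for illustration):

```java
import org.junit.jupiter.api.Test;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DeploymentSmokeTest {

    @Test
    void deployedServiceAnswersOnItsHealthEndpoint() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://staging.example.com/health"))  // hypothetical environment
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The deployment "worked" in the most basic sense: the service is up and responding.
        assertEquals(200, response.statusCode());
    }
}
```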
I've said it before but I'll say it again - the best resource I've found on the subject is Growing Object-Oriented Software, Guided by Tests - which is part of the Kent Beck Signature Series.
TDD is not about testing. TDD uses tests to drive the design of your code. TDD produces tests as a happy side-effect of designing your code by writing the tests first, but it's not about testing: it isn't a testing methodology and the purpose is not to produce tests.
Is test driven development a form of unit testing?
No. It is a design methodology.
From what I understand in TDD, you write your test first (which fails), write your code and then run your test which should pass.
You're missing a very important step. You write your test first, you write your code until your test passes - and then you refactor. The tests permit you to refactor safely, ensuring that the desired behavior continues to work while you adjust your design. The tests also guide you to testable code, promoting smaller methods, shorter parameter lists, and overall much simpler design than other methodologies lead you to.
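As a minimal illustration of those three steps (the PriceFormatter example is hypothetical), the comments below mark which part of red-green-refactor each piece belongs to; in practice these are separate, tiny iterations:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceFormatterTest {

    // Step 1 (red): written first; it fails while PriceFormatter does not exist
    // or returns the wrong value.
    @Test
    void formatsCentsAsEuros() {
        assertEquals("12.34 EUR", new PriceFormatter().format(1234));
    }

    // Step 2 (green): the simplest implementation that makes the test pass.
    // Step 3 (refactor): with the test in place, the internals can be reshaped
    // safely (extract methods, rename, remove duplication) while staying green.
    static class PriceFormatter {
        String format(int cents) {
            return String.format("%d.%02d EUR", cents / 100, cents % 100);
        }
    }
}
```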
Apart from TDD, do you write any other kind of test?
When I do, it's usually a sign that I've failed to do TDD properly (but it certainly happens). We have both unit tests and user acceptance tests; both can be written prior to code, but sometimes our user acceptance tests are written later in the development cycle. They shouldn't be, but sometimes they are.
TDD is about design during the 5 minutes or so of your original Red-Green-Refactor loop. But it's arguably about testing forever after since there is nothing left to design - your TDD tests then become part of a perfect test harness to detect regressions caused by further developments. So yes, I guess you could say test driven development is a form of unit testing :)
But how are we supposed to know about the stubbing required when we are writing our test first?
TDD often requires a (quick) prior modelling session where you flesh out the big picture classes your SUT will collaborate with.
However you need not go into the details of how these collaborators work. With mocks you basically apply wishful thinking that their implementations will behave correctly when you have TDD'd them at some point later, so for now you can just concentrate on the SUT.
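A small sketch of that wishful thinking (all names are hypothetical): the ExchangeRates collaborator exists only as an interface the test wishes for, a hand-rolled stub stands in for it, and the real implementation can be TDD'd later:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceConverterTest {

    interface ExchangeRates {                     // defined by wishful thinking; no implementation yet
        double rate(String from, String to);
    }

    static class PriceConverter {                 // the SUT being test-driven right now
        private final ExchangeRates rates;
        PriceConverter(ExchangeRates rates) { this.rates = rates; }
        double convert(double amount, String from, String to) {
            return amount * rates.rate(from, to);
        }
    }

    @Test
    void convertsUsingTheRateFromTheCollaborator() {
        ExchangeRates stubbedRates = (from, to) -> 1.25;   // hand-rolled stub, no mocking library
        PriceConverter converter = new PriceConverter(stubbedRates);

        assertEquals(125.0, converter.convert(100.0, "USD", "EUR"), 1e-9);
    }
}
```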
When you are building your application prior to release, what kinds of tests do you include in the build? Does the build run your integration tests or only your unit tests?
When you practice Continuous Integration, your unit tests are supposed to be run on every build, so you can theoretically take any (non-failing) build and use it as a release build.
However, you may want to run automated or manual integration/acceptance tests as well before releasing your version. GUIs for instance, are usually not easily unit testable so acceptance/integration testing is a good way to track bugs in them.
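One common way to keep the fast suite on every build while running the slower suite separately is to tag the tests so the build/CI configuration can include or exclude them per pipeline stage. A minimal sketch, assuming JUnit 5 (class names are hypothetical, and in a real project the classes would live in separate files):

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Plain unit test: fast, runs on every commit.
class InvoiceCalculationTest {
    @Test
    void zeroItemsMeansZeroTotal() {
        assertEquals(0, 0);   // placeholder assertion for the sketch
    }
}

// Tagged integration test: talks to real resources, run nightly or before release
// by filtering on the "integration" tag in the build configuration.
@Tag("integration")
class InvoiceRepositoryIT {
    @Test
    void storesAndReloadsAnInvoice() {
        assertEquals(0, 0);   // placeholder assertion for the sketch
    }
}
```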
You have several questions here, so I'll try to address them in a logical order.
Is TDD a form of unit testing?
Id say "yes", in the sense it creates unit tests, even if it isnt the only benefit of using TDD. On the topic stressed by commentators, but not mentioned in your question: TDD not only ensures test coverage and documentiation (good tests are one of the best form of low level code documentation). Using TDD forces you to make certain design decisions, usually improving the overall app design.
Do You write other tests?
Well, I don't write any other unit tests. The point of TDD is developing the code in parallel with developing the tests. By writing software in a cycle - a single test, then only enough code to pass it - you make sure that your tests document all the functionality and behaviour you require from your code, and you make sure that the code is testable (you have to write it that way when doing TDD). There should be no need for additional unit tests.
There are other kinds of tests that you should use, though. Integration tests come to mind first, but there are others, like acceptance tests. If you have those automated, it will make things easier for you. It's not you who should be writing acceptance tests - it should be your customer/stakeholder, and you should be helping them with the technical part of writing them. You may be interested in Fitnesse http://fitnesse.org/ - it's a tool that helps non-technical people build acceptance tests.
About the stubbing?
It's kind of difficult to discuss this without concrete examples. All I can say right now is: just write the code one test at a time. If you do so, chances are you won't encounter a situation where you have a complicated class and have to think about how to stub around its complex dependencies.
What tests should be included in the build?
I'd say: all of them, if at all possible!

TDD: Unit testing focus

Could TDD be oriented to another kind of testing different from unit testing?
While that might be possible under some interpretation of TDD, I think the main point of TDD is to write the tests before any production code. Given that, you won't have a large system to write integration or functional tests for, so the testing is necessarily going to be on the unit level.
Behavior-Driven Development (BDD) applies the ideas of TDD at the integration testing and functional testing level.
The red-green-refactor cycle of TDD is supposed to be quick, really quick. Fast feedback keeps you in the groove. I've seen approaches to TDD that take a full story, express it as a test, then drive development to pass that (large-ish) test. It's nominally TDD (or maybe BDD), but it doesn't feel right to me. Tiny steps, unit tests, is how I learned TDD, how I think of it, and how it works best for me.
Technically, TDD is a way of doing things, not just about unit testing; in theory it should drive the whole development process.
In theory, the philosophy is that testing drives development. For a more complex scenario, like integration between systems, you should define the integration tests, then code to pass those integration tests (even if the tests are not automated)...
Of course, YES. TDD relies on automated tests, which is an orthogonal concern to the "type" of the tests.
If you concentrate on the idea, not the technical realization, then yes. What I'm saying is that if, just for a moment, you forget about unit testing and focus on the idea of writing tests first, before writing the implementation, in order to achieve a clearer design, then it can be done even at the system level.
Imagine this: you have some requirements. Based on those you write user acceptance tests (UAT) - high-level tests that capture functionality. Next you start development; you already have the use cases in the form of UAT tests. You know exactly what is expected, so it is easier to implement the desired functionality.
Another example is a project based on Scrum. In the planning meeting you discuss/create user stories that are later developed during the sprint. Those user stories can actually be UAT tests.
Anyway, I see TDD as a way of specifying design up front, not as an application testing cycle/phase/methodology. The reason TDD is perceived as a synonym for unit testing is that unit tests are as close to the developer as possible. They seem the natural way for a developer to express the functional design of a class/method.
Certainly! TDD does not require unit tests, not at all. Unfortunately, this seems to be a common misunderstanding.
For a concrete example, I drive the development of an open source mocking library of mine (for Java) entirely with integration tests. I don't write separate unit tests for internal classes. Instead, for every new feature or enhancement I first add a failing acceptance (integration) test, and then change or add to existing production code until the test passes. With the eventual refactoring step, this is pure TDD, even if no unit tests get written.

Why is using integration tests instead of unit tests a bad idea?

Let me start from definition:
Unit Test is a software verification and validation method in which a programmer tests if individual units of source code are fit for use
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up. Developers refer to automated integration tests as unit tests. Also, some argue over which one is better, which seems to me to be the wrong question altogether.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
it's more difficult to create a test environment with integration tests
it's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect further away (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically, I want to collect arguments to present to development teams which write ONLY integration tests and consider them to be unit tests.
Any test which involves components from different layers is considered an integration test. This is in contrast to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't a problem, aren't the problem. Using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide # 5, or search the net for "Defect prevention: Reducing costs and enhancing quality". The graph from the chart is duplicated below in case it ever becomes hard to find on the net.
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), so you need more debugging when they fail.
The combination of scenarios is too big for integration tests alone when the code is not unit tested.
Mostly I write unit tests, and about 10 times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD approach, unit tests aren't tests, they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it. I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests verify that a single class is doing what it was designed to do. Integration tests verify that a number of classes working together do what you expect them to do for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component such as a method or at most a class is supposed to test that component in every legal scenario (of course, one abstracts equivalence classes but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you listed. Without unit tests, it is more likely that a change to a unit that essentially operates it in a new scenario will not be caught and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally based on one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible and extendable.
When you check in, your build server should only run unit tests, and they should be done in a few seconds, not minutes or hours. Integration tests should be run overnight or manually as needed.
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests, though, having dependencies and taking a long time to run. They should function separately and as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up sort of a yin and yang of testing, as they approach it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test will tell you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your functions are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test will give you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit Testing: Test small things in isolation with low-level details, including but not limited to "method conditions", checks, loops, defaulting, calculations, etc.
Integration testing: Test a wider scope which involves a number of components, which can impact the behaviour of other things when married together. Integration tests should cover end-to-end integration & behaviours. The purpose of integration tests should be to prove that systems/components work fine when integrated together.
(I think) What the OP refers to here as integration tests leans more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes and only testing their own feature code, because they think someone else tested the rest, so it should be fine. This helps code coverage but can harm the application in general.
In theory, the tight isolation of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking APIs and persistence would be an example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some real objects for a test without mocking; if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them and aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test (TDD). A rough sketch of such a test follows.
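A rough sketch of that kind of test (all names are hypothetical): real domain objects are used directly, and only the persistence boundary is replaced, here with a tiny hand-rolled in-memory fake rather than a mocking framework:

```java
import org.junit.jupiter.api.Test;
import java.util.ArrayList;
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertEquals;

class RegistrationServiceTest {

    interface UserRepository {                          // the persistence boundary
        void save(String email);
        List<String> findAll();
    }

    static class InMemoryUserRepository implements UserRepository {   // hand-rolled fake
        private final List<String> users = new ArrayList<>();
        public void save(String email) { users.add(email); }
        public List<String> findAll() { return users; }
    }

    static class RegistrationService {                  // real logic, nothing else faked
        private final UserRepository repository;
        RegistrationService(UserRepository repository) { this.repository = repository; }
        void register(String email) {
            if (!email.contains("@")) throw new IllegalArgumentException("invalid email");
            repository.save(email.toLowerCase());
        }
    }

    @Test
    void registeredUsersAreStoredNormalized() {
        UserRepository repository = new InMemoryUserRepository();

        new RegistrationService(repository).register("Alice@Example.com");

        assertEquals(List.of("alice@example.com"), repository.findAll());
    }
}
```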
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Provides a lot of data for code quality checking (e.g. coverage, etc.)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
A good integration test would be executed for complete end-to-end scenarios and would even check persistence and APIs, which the unit tests could not cover, so you know where to look first when those fail.
Pros:
Tests close to real-world e2e scenarios
Finds issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment (persistence and API) pollution issues (needs cleanup steps)
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to the management.

What are the key points to explain Unit Testing?

I want to introduce Unit Testing to some colleagues that have no or little experience with Unit Testing. I'll start with a presentation of about an hour to explain the concept and give lots of examples. I'll follow up with pair programming sessions and code reviews.
What are the key points that should be focused on in the introduction?
To keep it really short: Unit testing is about two things
a tool for verifying intentions
a necessary safety net for refactoring
Obviously, it is a lot more than that, but to me that pretty much sums it up.
Unit tests test small things
Another thing to remember is that unit tests test small things, "units". So if your test runs against a resource like a live server or a database, most people call that a system or integration test. To unit test just the code that talks to a resource like that, people often use mock objects (often called mocks).
Unit tests should run fast and be run often
When unit tests test small things, the tests run fast. That's a good thing. Frequently running unit tests helps you catch problems soon after they occur. The ultimate in frequently running unit tests is having them automated as part of continuous integration.
Unit tests work best when coverage is high
People have different views as to whether 100% unit test coverage is desirable. I'm of the belief that high coverage is good, but that there's a point of diminishing return. As a very rough rule of thumb, I would be happy with a code base that had 85% coverage with good unit tests.
Unit tests aren't a substitute for other types of tests
As important as unit tests are, other types of testing, like integration tests, acceptance tests, and others, can also be considered parts of a well-tested system.
Unit testing existing code poses special challenges
If you're looking to add unit tests to existing code, you may want to look at Working Effectively with Legacy Code by Michael Feathers. Code that wasn't designed with testing in mind may have characteristics that make testing difficult and Feathers writes about ways of carefully refactoring code to make it easier to test. And when you're familiar with certain patterns that make testing code difficult, you and your team can write code that tries to avoid/minimize those patterns.
You might get some inspiration here too https://stackoverflow.com/questions/581589/introducing-unit-testing-to-a-wary-team/581610#581610
Remember to point out that Unit Testing is not a silver bullet and shouldn't replace other forms of traditional testing (Functional Tests etc) but should be used in conjunction.
Unit testing works better in some areas than others, so the only way to have truly comprehensive testing is to combine it with other forms.
This seems to be one of the biggest criticisms I see of Unit Testing as a lot of people don't seem to 'get' that it shouldn't be replacing other forms of testing in totality.
Main points:
unit tests help both to design code (by expressing intent) and to regression-test it (by never going away);
unit tests are for lazy programmers who don't want to debug their code again;
tests have no business influencing or affecting business logic and functionality, but they do test it fully;
unit tests demand the same qualities as regular code: theory and strategy, organization, patterns, smells, refactoring;
Unit tests should be FAIR.
F - Fast
A - can be easily Automated
I - can be run Independently
R - Repeatable