How to do unit testing [closed]

I have a method CreateTask(UserId).
For this method, is it enough to check UserId against null or empty and an invalid value?
Or should I check whether Task is created for a specific UserId?
And should I also check number of tasks created and their date and time?

I don't think there's enough information here to answer this. But to address some of your points:
For this method, is it enough to check UserId against null or empty and an invalid value?
The method itself can internally do that, but that's not part of testing. That's just a method at runtime doing some error checking. This is often referred to as "defensive programming."
Or should I check whether Task is created for the specific UserId?
This is where it gets cloudy. And this is where you would want to step back for a moment and look at the bigger picture. Make sure you're not tightly coupling your unit tests with your implementation logic. The tests should validate the business logic, unaware of the implementation.
It's highly likely that "creating a task" isn't business logic, but rather simply an implementation detail. What you should be testing is that when Step A is performed, Result B is observed. How the system goes about producing Result B is essentially what's being tested, but not directly or explicitly.
A big reason for keeping your unit tests high-level like this is that if the implementation details change, you don't want to have to change your tests with them. That drastically reduces the value of those tests: it not only adds more work to any change, it also eliminates the tests as the validation point for those changes, since the tests themselves also change. The tests should be fairly simple and static, acting as a set of rules used to validate the code. If the tests are complex and often changing, they lose the level of confidence needed to validate the code.
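For instance, a behaviour-level test of the CreateTask(UserId) method from the question might assert on an observable result rather than on internal calls. This is only a sketch: TaskService, InMemoryTaskStore and GetTasksForUser are hypothetical names standing in for whatever your system actually exposes, and an NUnit-style framework is assumed.
// Sketch only: the class and method names below are invented for illustration.
[Test]
public void CreateTask_ForValidUser_MakesTheTaskVisibleToThatUser()
{
    var store = new InMemoryTaskStore();      // test double, not the real database
    var service = new TaskService(store);

    service.CreateTask("user-42");

    // Assert on the observable outcome, not on which internal methods were called.
    Assert.AreEqual(1, service.GetTasksForUser("user-42").Count);
}
If the implementation of CreateTask later changes (different helper methods, different persistence details), this test should keep passing as long as the observable behaviour is preserved.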
You don't have to test every method. You should test every observable business action that the system performs. Methods which perform those actions get tested as a result of this. Methods which don't perform those actions are then questionable as to whether or not you need them in the first place. A code coverage tool is great for determining this.
For example, let's say you have MethodA() which doesn't get used by any of the tests. No test calls it directly, because it's just an implementation detail and the tests don't need to know about it. (In this case it might even make sense for the method to be private or in some other way obscured from the external calling code.) This leaves you with two options:
The tests are incomplete, because MethodA() is needed by a business process which isn't being tested. Add tests for that business process.
The tests are complete, and MethodA() isn't actually needed by the system. Remove it.
If your tests blindly test every method regardless of the bigger picture of the business logic, you'd never be able to determine when something isn't needed by the system. And deprecating code which is no longer needed is a huge part of keeping a simple and maintainable codebase.

1) Keep your methods short and simple, so unit testing will be easy (btw, one of the reasons TDD encourages good design)
2) Check all boundary conditions (invalid input, trivial input, etc.) (btw, one of the ways to make TDD easy)
3) Check all possible scenarios to achieve high coverage (all possible execution flows through your method, with the simplest input that achieves this) (btw, one of the reasons TDD works)
4) Check a few (maybe one) typical scenarios with real data that demonstrate typical usage of the method
And as you've probably noticed - consider using TDD so you won't have to worry about the issue of "testing an existing method" :)
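Applied to the CreateTask(UserId) method from the question above, the boundary cases in (2) might look roughly like this. It is only a sketch: TaskService and InMemoryTaskStore are invented names, NUnit-style attributes are assumed, and the exception type is an assumption about how the method rejects bad input.
// Sketch: the names and the thrown exception type are assumptions.
[TestCase(null)]
[TestCase("")]
[TestCase("   ")]
public void CreateTask_RejectsMissingOrBlankUserId(string badUserId)
{
    var service = new TaskService(new InMemoryTaskStore());

    // Boundary conditions: null, empty and whitespace-only ids are all rejected.
    Assert.Throws<System.ArgumentException>(() => service.CreateTask(badUserId));
}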

Related

Are there things that you can't test when applying TDD [closed]

I've been applying TDD to some new projects to get the hang of things, and I understand the basic flow: Write tests that fail, write code to pass tests, refactor if needed, repeat.
To keep the tests fast and decoupled, I abstract out things like network requests and file I/O. These are usually abstracted into interfaces which get passed through using dependency injection.
Usually development is very smooth until the end, when I realize I need to implement these abstract interfaces. The whole point of abstracting them was to make the code easily testable, but following TDD I would need to write a test before writing the implementation code, correct?
For example, I was looking at the tdd-tetris-tutorial https://github.com/luontola/tdd-tetris-tutorial/tree/tutorial/src/test/java/tetris. If I wanted to add the ability to play with a keyboard, I would abstract away basic controls into methods inside a Controller class, like "rotateBlock", "moveLeft", etc. that could be tested.
But at the end I would need to add some logic to detect keystrokes from the keyboard when implementing a controller class. How would one write a test to implement that?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
I use a slightly different spelling: not all things can be tested at the same level of cost effectiveness.
The "trick", so to speak, is to divide your code into two categoies: code that is easy to test, and code that is so obvious that you don't need to test it -- or not as often.
The nice thing about simple adapters is that (once you've got them working at all) they don't generally need to change very much. All of the logic lives somewhere else and that somewhere else is easy to test.
Consider, for example, reading bytes from a file. That kind of interface looks sort of like a function that accepts a filename as an argument and either returns an array of bytes or throws some sort of exception. Implementing that is a straightforward exercise in most languages, and the code is so textbook-familiar that it falls clearly into the category of "so simple there are obviously no deficiencies".
Because the code is simple and stable, you don't need to test it at anywhere near the frequency that you test code you regularly refactor. Thus, the cost-benefit analysis supports the conclusion that you can delegate your occasional tests of this code to more expensive techniques.
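For instance, the whole file-reading "adapter" might be little more than the sketch below (IFileSource and FileSystemSource are invented names; C# assumed). All of the interesting logic consumes the interface and can be tested cheaply against a fake.
// The entire adapter: so simple there are obviously no deficiencies.
public interface IFileSource
{
    byte[] ReadAllBytes(string path);
}

public sealed class FileSystemSource : IFileSource
{
    // Thin pass-through to the framework; tested occasionally, not on every run.
    public byte[] ReadAllBytes(string path) => System.IO.File.ReadAllBytes(path);
}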
100% statement coverage was never the goal of TDD (although it is really easy to understand how you -- and a small zillion other people -- reached that conclusion). It was primarily about deferring detailed design. So to some degree code that starts simple and changes infrequently was "out of bounds" from the get go.
You can't test everything with TDD unit tests. But if you also have integration tests, you can test those I/O interfaces. You can produce integration tests using TDD.
In practice, some things are impractical to test automatically. Error handling for rare error conditions and race hazards is the hardest.

How do I refactor unit tests? [closed]

This has been driving me nuts lately...
What is refactoring?
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
And how do we make sure we don't break anything during refactoring?
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests are used to demonstrate that the behavior of the module is correct before the refactoring.
Okay fine. But how do I proceed if I find a code smell in the unit tests themselves? Say, a test method that does too much? How do I make sure I don't break anything while refactoring the unit tests?
Do I need some kind of meta-tests? Is it unit tests all the way down?
Or do unit tests simply not obey the normal rules of refactoring?
In my experience, there are two reasons to trust tests:
Review
You've seen it fail
Both of these are activities that happen when a test is written. If you keep tests immutable, you can keep trusting them.
Every time you modify a test, it becomes less trustworthy.
You can somewhat alleviate that problem by repeating the above process: review the changes to the tests, and temporarily change the System Under Test (SUT) so that you can see the tests fail as expected.
When modifying tests, keep the SUT unchanged. Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
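As a small illustration (a sketch, assuming an NUnit-style framework and the Calculator example used later in this thread): the test below has been restructured by extracting a helper, while the Calculator under test is left untouched. To regain trust you review the diff and, temporarily, break Add on purpose to watch the test fail again.
// Only the test's own structure changes: the setup is extracted into a helper,
// the hard-coded expectation stays the same, and the production Calculator is
// not modified at all.
[Test]
public void Add_TwoNumbers_ReturnsTheirSum()
{
    var calculator = CreateCalculator();   // extracted helper, was inline before

    Assert.AreEqual(3, calculator.Add(5, -2));
}

private static Calculator CreateCalculator() => new Calculator();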
With respect to this being an older post: it was referenced in a comment on my post about TDD in practice, so upon review I'd like to throw in my two cents.
Mainly because I feel the accepted answer makes the slippery statement:
Every time you modify a test, it becomes less trustworthy.
I take issue with the word modify. In regard to refactoring, words like change and modify are often avoided, as they carry implications counter to refactoring.
If you modify a test in the traditional sense there is risk you introduced a change that made the test less trustworthy.
However, if you modify a test in the refactor sense then the test should be no less trustworthy.
This brings me back to the original question:
How do I refactor unit tests?
Quite simply, the same as you would any other code - in isolation.
So, if you want to refactor your tests, don't change the production code; change only your tests.
Do I need test for my tests?
No. In fact, Kent Beck addresses this exact question in his Full Stack Radio interview, saying:
Your code is the test for your tests
Mark Seemann also notes this in his answer:
Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
In the end, this is not so much about how to refactor tests as much as it is refactoring in general. The same principles apply, namely refactoring restructures code without changing its external behavior. If you don't change the external behavior, then no trust is lost.
How do I make sure I don't break anything while refactoring the unit tests?
Keep the old tests as a reference.
To elaborate: unit tests with good coverage are worth their weight in results. You don't keep them for amazing program structure or lack of duplication; they're essentially a dataset of useful input/output pairs.
So when "refactoring" tests, it only really matters that the program tested with the new set shows the same behaviour. Every difference should be carefully, manually inspected, because new program bugs might have been found.
You might also accidentally reduce the coverage when refactoring. That's harder to find, and requires specialized coverage analysis tools.
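One low-risk way to do that restructuring is to collapse near-duplicate tests into a single parameterised test, keeping exactly the same input/output pairs. This is a sketch, assuming NUnit's [TestCase] attribute and the Calculator example from elsewhere in this thread.
// The data set is unchanged; only the shape of the test code changes.
[TestCase(5, -2, 3)]
[TestCase(0, 0, 0)]
[TestCase(2, 2, 4)]
public void Add_ReturnsExpectedSum(int x, int y, int expected)
{
    Assert.AreEqual(expected, new Calculator().Add(x, y));
}
Because the pairs are preserved verbatim, any difference in test outcomes after the rewrite points either at a mistake in the refactoring or at a newly exposed program bug.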
You don't know that you won't break anything. To avoid the "who will test our tests?" problem, you should keep tests as simple as possible, to reduce the possibility of making an error.
When you refactor tests, you can always use automated refactorings or other "trusted" techniques such as method extraction.
You also often rely on existing testing frameworks, which are tested by their creators. So when you start to build your own (even simple) framework or complex helper methods, you can always test those as well.

Is unit testing a burden? [closed]

I haven't done much unit testing (never TDD'd), but I'm working on a project where I'd like at least the business layer to be unit-testable, and there are a couple of things I just can't seem to understand.
Say I have a feature that:
Prompts user for an Excel workbook filename
Loads the contents of the workbook into a grid
Displays that grid to the user on a form, which also takes a couple of inputs from the user
Displays a modal form with a progress bar that updates while the feature asynchronously...
Performs one operation or another (based on the user's inputs on the now-closed form) and, roughly speaking, ends up importing the grid content into a database table.
I think I managed to properly separate business, data and presentation concerns here: the form contains only presentation-related code, data operations are method calls made on the DAL (Linq to SQL) and everything is coordinated in the business logic layer. Dependencies are injected in constructors; for example the form wants a DataTable, not an Excel workbook filename.
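Roughly, the shape is something like this (a simplified sketch; the names are invented for illustration):
// Simplified sketch of the layering described above; names are invented.
public class WorkbookImporter
{
    private readonly IImportRepository _repository;   // DAL hidden behind an interface

    public WorkbookImporter(IImportRepository repository)
    {
        _repository = repository;
    }

    // The business layer receives a DataTable, not an Excel filename.
    public void Execute(System.Data.DataTable grid, ImportOptions options)
    {
        // validate, transform, then call _repository to insert the rows
    }
}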
If I think of tests I could (or should) write, I simply can't think of any, other than trivial (useless?) ones that validate that a foreach loop does the expected number of iterations, or that a GetMilimeterDimensions function really does convert inches to millimeters (for example, that GetMilimeterDimensions(new SizeF(1f, 1f)) returns a SizeF { Width = 25.4, Height = 25.4 }). Reading this now, it seems like the function's name should rather be something like ConvertInchToMilimeter, and it feels like a function that belongs as a static method on some BusinessMath class. Bad example, I guess.
The point is, all these functions and methods I'd want to test are private and ultimately called by the class's COM-visible Execute() method. Does that mean I must make them public just for testing? Or should I embed the behavior of my functionality in some FunctionalityImplementation class that does expose its methods, and have my functionality call those methods instead? That feels like overkill...
Then there are the DAL calls; I'd need to implement some repository pattern in order to mock CRUD operations and be able to write tests that validate whether the expected number of records gets inserted... but that's beyond business-layer testability, I guess.
Nevertheless, it seems like a lot of work just to get a bunch of green dots in some VS plugin. I realize that once a test is written, code changes that break it make you thankful the test was written in the first place, but I think I'm completely missing the point of unit testing, and that the tests I would write would be meaningless, if useful at all. Please help me out here; the more I think about it, the more it seems to me that unit tests are just additional work that imposes its own design patterns.
Don't get me wrong (with the question's title I guess), I do see the benefits of TDD, I've read ASP.NET MVC books written by unit test enthusiasts, but I just can't seem to wrap my head around how to apply these principles to a simple functionality, let alone to a whole COM-visible library project...
Unit testing isn't a burden; it saves a lot of time, prevents errors, and facilitates refactoring that keeps code maintainable and malleable.
But you might not need unit tests for this project.
The most urgent unit tests would ensure that your business logic does what you think it does. That's really useful where the business logic is complicated, has lots of moving parts, and is likely to grow more complex over time:
Person *p = ...sample person....;
LoanRequest loan(p, $1,000,000);
Assert(loan.CanBeApproved(bankPolicy,region,market,term),true);
But it's not essential if the underlying logic is simple and evidently correct:
Price=Total+Shipping;
Similarly, if you're writing a quick widget for immediate short-term use, long-term maintenance isn't your first concern and the role of tests as documentation for future collaborators is probably irrelevant. If you're building a product, though, you'll want those tests.
In general, unit tests should primarily be concerned with public behavior. But occasionally you may find it much easier to verify intermediate results that are normally private. Making a method public, or providing a special hook for testing, can be a reasonable price to pay for confidence that the software does what you think, and will continue to do what you think even after other people start changing it.
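In C#, for instance, one fairly cheap "special hook" is to mark such members internal and expose them only to the test assembly via InternalsVisibleTo. This is only a sketch; the assembly name is a placeholder and BusinessMath echoes the example from the question above.
// In any source file of the production project; "MyProject.Tests" is a placeholder.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]

// Invisible to normal callers, but directly testable from the test project.
internal static class BusinessMath
{
    internal static System.Drawing.SizeF ConvertInchToMilimeter(System.Drawing.SizeF inches)
        => new System.Drawing.SizeF(inches.Width * 25.4f, inches.Height * 25.4f);
}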

New to unit testing, how to write great tests? [closed]

I'm fairly new to the unit testing world, and I just decided to add test coverage for my existing app this week.
This is a huge task, mostly because of the number of classes to test but also because writing tests is all new to me.
I've already written tests for a bunch of classes, but now I'm wondering if I'm doing it right.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is just a feeling, and as said earlier, I have no experience of testing. If some more experienced testers out there could give me advice on how to write great tests for an existing app, that would be greatly appreciated.
Edit: I would like to thank Stack Overflow; I got great input in less than 15 minutes, which answered more than the hours of online reading I just did.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
I think you are doing it wrong.
A unit test should:
test one method
provide some specific arguments to that method
test that the result is as expected
It should not look inside the method to see what it is doing, so changing the internals should not cause the test to fail. You should not directly test that private methods are being called. If you are interested in finding out whether your private code is being tested then use a code coverage tool. But don't get obsessed by this: 100% coverage is not a requirement.
If your method calls public methods in other classes, and these calls are guaranteed by your interface, then you can test that these calls are being made by using a mocking framework.
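A sketch of what that can look like with a mocking library such as Moq (the INotificationSender and OrderProcessor names are invented): the test verifies an interaction that the interface contract guarantees, not an implementation detail.
[Test]
public void PlacingAnOrder_SendsAConfirmationToTheCustomer()
{
    var sender = new Mock<INotificationSender>();
    var processor = new OrderProcessor(sender.Object);

    processor.Place("user-42");

    // Guaranteed by the contract: placing an order always notifies the customer.
    sender.Verify(s => s.SendConfirmation("user-42"), Times.Once());
}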
You should not use the method itself (or any of the internal code it uses) to generate the expected result dynamically. The expected result should be hard-coded into your test case so that it does not change when the implementation changes. Here's a simplified example of what a unit test should do:
public void testAdd()
{
    int x = 5;
    int y = -2;
    int expectedResult = 3;

    Calculator calculator = new Calculator();

    int actualResult = calculator.Add(x, y);

    Assert.AreEqual(expectedResult, actualResult);
}
Note that how the result is calculated is not checked - only that the result is correct. Keep adding more and more simple test cases like the one above until you have covered as many scenarios as possible. Use your code coverage tool to see if you have missed any interesting paths.
For unit testing, I have found both test-driven (tests first, code second) and code-first, test-second approaches to be extremely useful.
Instead of writing code and then writing a test for it, write the code and then look at what you THINK the code should be doing. Think about all its intended uses and then write a test for each. I find writing tests to be faster, but more involved, than the coding itself. The tests should test the intention. Thinking about the intentions, you also wind up finding corner cases during the test-writing phase. And of course, while writing tests you might find that one of those few uses causes a bug (something I often find, and I am very glad when such a bug did not corrupt data and go unchecked).
Yet testing is almost like coding twice. In fact, I have had applications where there was more test code (by quantity) than application code. One example was a very complex state machine. I had to make sure that after adding more logic to it, the entire thing still worked on all previous use cases. And since those cases were quite hard to follow by looking at the code, I wound up with such a good test suite for this machine that I was confident it would not break even after making changes, and the tests saved my ass a few times. And as users or testers found bugs in the flow or corner cases unaccounted for, guess what: they were added to the tests and never happened again. This really gave users confidence in my work, in addition to making the whole thing super stable. And when it had to be rewritten for performance reasons, guess what: it worked as expected on all inputs, thanks to the tests.
All the simple examples, like a square(number) function, are great and all, but they are probably bad candidates for spending lots of time testing. The functions that do important business logic, that's where testing is important. Test the requirements. Don't just test the plumbing. If the requirements change, then guess what: the tests must change too.
Testing should not be literally testing that function foo invoked function bar 3 times. That is wrong. Check if the result and side-effects are correct, not the inner mechanics.
It's worth noting that retrofitting unit tests into existing code is far more difficult than driving the creation of that code with tests in the first place. That's one of the big questions in dealing with legacy applications: how to unit test? This has been asked many times before (so your question may be closed as a duplicate), and people usually end up here:
Moving existing code to Test Driven Development
I second the accepted answer's book recommendation, but beyond that there's more information linked in the answers there.
Don't write tests to get full coverage of your code. Write tests that guarantee your requirements. You may discover code paths that are unnecessary. Conversely, if they are necessary, they are there to fulfill some kind of requirement; find out what it is and test the requirement (not the path).
Keep your tests small: one test per requirement.
Later, when you need to make a change (or write new code), try writing one test first. Just one. Then you'll have taken the first step in test-driven development.
Unit testing is about the output you get from a function/method/application.
It does not matter at all how the result is produced, it just matters that it is correct.
Therefore, your approach of counting calls to inner methods and such is wrong.
What I tend to do is sit down and write what a method should return given certain input values or a certain environment, then write a test which compares the actual value returned with what I came up with.
Try writing a Unit Test before writing the method it is going to test.
That will definitely force you to think a little differently about how things are being done. You'll have no idea how the method is going to work, just what it is supposed to do.
You should always be testing the results of the method, not how the method gets those results.
Tests are supposed to improve maintainability. If you change a method and a test breaks, that can be a good thing. On the other hand, if you look at your method as a black box, then it shouldn't matter what is inside the method. The fact is, you need to mock things for some tests, and in those cases you really can't treat the method as a black box. The only thing you can do is write an integration test: you load up a fully instantiated instance of the service under test and have it do its thing as it would when running in your app. Then you can treat it as a black box.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is because you are writing your tests after you wrote your code. If you did it the other way around (wrote the tests first), it wouldn't feel this way.

What is a "Unit"? [closed]

In the context of unit testing, what is a "unit"?
I usually define it as a single code execution path through a single method. That comes from the rule of thumb that the number of unit tests required to test a method is equal to or greater than the method's cyclomatic complexity number.
While the definition can vary, a "unit" is a stand-alone piece of code.
Usually, it's a single Class.
However, few classes exist in isolation. So, you often have to mock up the classes that collaborate with your class under test.
Therefore, a "unit" (also called a "fixture") is a single testable thing -- usually a class plus mock-ups for collaborators.
You can easily test a package of related classes using unit-testing technology. We do this all the time. There are few or no mocks in these fixtures.
In fact, you can test whole stand-alone application programs as single "units". We do this also, providing a fixed set of inputs and expected outputs to be sure the overall application does things correctly.
A unit is any element that can be tested in isolation. Thus, one will almost always be testing methods in an OO environment, and some class behaviours where there is close coupling between methods of that class.
In my experience the debate about "what is a unit" is a waste of time.
What matters far more is "how fast is the test?" Fast tests run at the rate of 100+/second. Slow tests run slow enough that you don't reflexively run them every time you pause to think.
If your tests are slow you won't run them as frequently, making the time between bug injection and bug detection longer.
If your tests are slow you probably aren't getting the design benefits of unit testing.
Want fast tests? Follow Michael Feathers' rules of unit testing.
But if your tests are fast and they're helping you write your code, who cares what label they have?
We define 'unit' to be a single class.
As you rightly assert 'unit' is an ambiguous term and this leads to confusion when developers simply use the expression without adding detail. Where I work we have taken the time to define what we mean when we say 'unit test', 'acceptance test', etc. When someone new joins the team they learn the definitions that we have.
From a practical point of view there will likely always be differences of opinion about what a 'unit' is. I have found that what is important is simply that the term is used consistently within the context of a project.
Where I work, a 'unit' is a function. That is because we are not allowed to use anything other than functional decomposition in our design (no OOP). I agree 100% with Will's answer. At least, that is my answer within the paradigm of my work in embedded programming for engine and flight controls and various other systems.
It can be different things: a class, a module, a file... Choose your desired granularity of testing.
I would say a unit is a 'black box' which may be used within an application. It is something which has a known interface and delivers a well-defined result. This is something which has to work according to a design-spec and must be testable.
Having said that, I often use unit testing when building items within such 'black boxes' just as a development aid.
My understanding (or rationale) is that units should follow a hierarchy of abstraction and scope similar to the hierarchical decomposition of your code.
A method is a small and often atomic (conceptually) operation at a low level of abstraction, so it should be tested
A class is a mid-level concept that offers services and states and should therefore be tested.
A whole module (especially if its components are hidden) is a high-level concept with a limited interface, so it should be tested, etc.
Since many bugs arise from the interactions between multiple methods and classes, I do not see how unit testing only individual methods can achieve coverage of those interactions until you already have methods that make use of every important combination; but that would indicate that you didn't test enough before writing client code.
That's not important. Of course it's normal to ask this when getting started with unit testing, but I repeat: it's not important.
The unit is something along the lines of:
the method invoked by the test; in OOP this method has to be invoked on an instance of a class (except for static methods)
a function in procedural languages.
But, the "unit", function or method, may also invoke another "unit" from the application, which is likewise exercised by the test. So the "unit" can span over several functions or even several classes.
"The test is more important than the unit" (testivus). A good test shall be :
Automatic - execution and diagnostic
Fast - you'll run them very often
Atomic - a test shall test only one thing
Isolated - tests shall not depend on each other
Repeatable - result shall be deterministic
I would say that a unit in unit testing is a single piece of responsibility for a class.
Of course this opinion comes from the way we work:
In my team we use the term unit tests for tests where we test the responsibilities of a single class. We use mock objects to stand in for all other objects, so that the class's responsibilities are truly isolated and not affected if other objects have errors in them.
We use the term integration tests for tests where we test how two or more classes function together, so that we see that the actual functionality is there.
Finally, we help our customers write acceptance tests, which operate on the application as a whole, to see what actually happens when a "user" is working with the application.
This way of working is what makes me think it is a good description.
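A rough illustration of that split, with invented names and assuming NUnit plus a mocking library such as Moq: the unit test isolates the class behind a mock, while the integration test wires two real classes together.
// Unit test: InvoiceService's own responsibility, with its collaborator mocked out.
[Test]
public void Total_AddsTaxReportedByTheTaxPolicy()
{
    var policy = new Mock<ITaxPolicy>();
    policy.Setup(p => p.TaxFor(100m)).Returns(20m);

    var service = new InvoiceService(policy.Object);

    Assert.AreEqual(120m, service.Total(100m));
}

// Integration test: two real classes functioning together.
[Test]
public void Total_WithRealFlatRatePolicy_IncludesTax()
{
    var service = new InvoiceService(new FlatRateTaxPolicy(0.2m));

    Assert.AreEqual(120m, service.Total(100m));
}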