How verbose/granular should your tests be? [closed] - c++

I recently started a new project where I decided to adopt writing unit tests for most functions. Prior to this, my testing was limited to sporadically writing test "functions" to ensure something worked as expected, and then never bothering to update them, which was clearly not good.
Now that I've written a fair amount of code, and tests, I'm noticing that I'm writing a lot of tests for my code. My code is generally quite modular, in the sense that I try to code small functions that do something simple and then chain them together in a larger function as required, which is, again, accepted best practice.
But I now end up writing tests both for the individual "building block" functions (quite small tests) and for the function that chains them together, testing the result there as well. Obviously the results differ, but since the inputs are similar I'm duplicating a lot of test code (the input setup portions, which differ slightly in each test but not by much; since they're not identical I can't just use a test fixture...).
Another concern is that I try to adhere quite strictly to testing one thing per test, so I write a single test for every different feature within the function. For instance, if there's some extra input that can be passed to the function, but which is optional, I write one version that adds the input and one that doesn't, and test them separately. The setup here is again mostly identical except for the input I added, but not exactly the same, so using a fixture doesn't feel "right".
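To make the duplication concrete, it looks roughly like this (just a sketch using Google Test, which I am not necessarily using; the names are invented and far simpler than my real code):
#include <gtest/gtest.h>  // link against gtest_main
#include <numeric>
#include <vector>

// Stand-in for the real code under test.
int process(const std::vector<int>& input) {
    return std::accumulate(input.begin(), input.end(), 0);
}

TEST(ProcessTest, ProcessesBaseInput) {
    std::vector<int> input{1, 2, 3};        // setup...
    EXPECT_EQ(6, process(input));
}

TEST(ProcessTest, HandlesOptionalExtraInput) {
    std::vector<int> input{1, 2, 3};        // ...nearly the same setup again
    input.push_back(4);                     // the optional extra input
    EXPECT_EQ(10, process(input));
}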
Since this is my first project with everything being fully unit tested, I just wanted to make sure I was doing things correctly and that the code duplication in tests is to be expected... so, my question is: am I doing things correctly? If not, what should I change?
I code in C and C++.
On a side note, I love the testing itself, I'm far more confident of my code now.
Thanks!

Your question tries to address many things, and I can try to answer only some of them.
Try to get as high coverage as possible (ideally 100%)
Do not use real resources for your unit test, or at least try to avoid it. You can use mocks and stubs for that.
Do not unit test 3rd party libraries.
You can break dependencies using dependency injection or functors. That way the size of your tests can decrease.
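For example, a rough C++ sketch (all names invented) of breaking a dependency with a functor, so the test can pass a stub instead of touching a real resource:
#include <cassert>
#include <functional>
#include <string>

// The production code takes the dependency as a parameter instead of
// calling the real resource (file system, network, ...) directly.
int count_lines(const std::string& path,
                const std::function<std::string(const std::string&)>& read_file) {
    const std::string text = read_file(path);
    int lines = 0;
    for (char c : text) {
        if (c == '\n') ++lines;
    }
    return lines;
}

int main() {
    // The test injects a stub, so no real file is ever opened.
    auto fake_read = [](const std::string&) { return std::string("a\nb\nc\n"); };
    assert(count_lines("ignored.txt", fake_read) == 3);
    return 0;
}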

Related

Are there things that you can't test when applying TDD [closed]

I've been applying TDD to some new projects to get the hang of things, and I understand the basic flow: Write tests that fail, write code to pass tests, refactor if needed, repeat.
To keep the tests fast and decoupled, I abstract out things like network requests and file I/O. These are usually abstracted into interfaces which get passed through using dependency injection.
Usually development is very smooth until the end, when I realize I need to implement these abstract interfaces. The whole point of abstracting them was to make the code easily testable, but following TDD I would need to write a test before writing the implementation code, correct?
For example, I was looking at the tdd-tetris-tutorial https://github.com/luontola/tdd-tetris-tutorial/tree/tutorial/src/test/java/tetris. If I wanted to add the ability to play with a keyboard, I would abstract away basic controls into methods inside a Controller class, like "rotateBlock", "moveLeft", etc. that could be tested.
But at the end I would need to add some logic to detect keystrokes from the keyboard when implementing a controller class. How would one write a test to implement that?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
I would phrase it slightly differently: not all things can be tested at the same level of cost effectiveness.
The "trick", so to speak, is to divide your code into two categoies: code that is easy to test, and code that is so obvious that you don't need to test it -- or not as often.
The nice thing about simple adapters is that (once you've got them working at all) they don't generally need to change very much. All of the logic lives somewhere else and that somewhere else is easy to test.
Consider, for example, reading bytes from a file. That kind of interface looks sort of like a function that accepts a filename as an argument and either returns an array of bytes or some sort of exception. Implementing that is a straightforward exercise in most languages, and the code is so textbook familiar that it falls clearly into the category of "so simple there are obviously no deficiencies".
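In C++, for instance, such an adapter might look roughly like this (a sketch only; error handling is deliberately minimal):
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>
#include <vector>

// A thin, "obviously correct" adapter around the file system.
// The interesting logic lives elsewhere, behind an easily tested seam.
std::vector<char> read_all_bytes(const std::string& filename) {
    std::ifstream in(filename, std::ios::binary);
    if (!in) {
        throw std::runtime_error("cannot open " + filename);
    }
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}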
Because the code is simple and stable, you don't need to test it at anywhere near the frequency that you test code you regularly refactor. Thus, the cost benefit analysis supports the conclusion that you can delegate your occasional tests of this code to more expensive techniques.
100% statement coverage was never the goal of TDD (although it is really easy to understand how you -- and a small zillion other people -- reached that conclusion). It was primarily about deferring detailed design. So to some degree code that starts simple and changes infrequently was "out of bounds" from the get go.
You can't test everything with TDD unit tests. But if you also have integration tests, you can test those I/O interfaces. You can produce integration tests using TDD.
In practice, some things are impractical to test automatically. Error handling for rare error conditions and race hazards are the hardest.

TDD: Why do 'Red Green Refactor' instead of just 'Green Refactor'? [closed]

I am 4 months into professional software development. TDD is non-negotiable at my company GO-JEK.
Here is my observation: people tend to first write code, and then write tests for it. Apparently, that is more convenient for people with 4-5 years of software development experience who were not following TDD before.
So, what is the reason people first write a failing test and then write code to pass it? Why don't people write the code first and then add a test for it?
We can perform refactoring either way.
This is a good question. Since we ultimately want our tests to pass, why not write them so they pass in the first place?
The answer is that we really want our tests to be driving development. We want the tests to come first. Because when we write a test which needs some functionality, that's a concrete expression of what is needed, and that new bit of functionality is well defined. Initially that functionality does not exist (so the test is red); when we have successfully added the functionality, the test is green. It's a clean determination: either the functionality is present and the test is passing - or it isn't, and the test is failing.
If instead we write the test green (with the functionality already present) we may have written more functionality than we actually need. Or we may have written bad code - the functionality is present but wrong - and a correspondingly bad test. When we write the test first, we witness the code base transitioning from the state of lacking the necessary functionality, to having it - and we know with a fair degree of confidence we've gotten it right.
By writing a failing test first, you know your test can fail, which is the whole point of a unit test: to fail when something is not working. If you only ever see a test pass, it may have been written in a way that can never fail, e.g. a forgotten assert or an otherwise poorly written test.
Also, by incrementally passing 'failing' tests, you know you are adding value with each code change.
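A contrived C++ sketch of that failure mode (names invented): the second test can never go red, and only watching it fail first would have exposed that.
#include <cassert>

int add(int a, int b) { return a + b; }

// Seen to fail before add() was written, so we know it exercises something.
void test_add_returns_sum() {
    assert(add(2, 2) == 4);
}

// Only ever seen green: the author forgot the assertion entirely.
// It passes today, it will pass forever, and it protects nothing.
void test_add_handles_negatives() {
    int result = add(-2, -3);
    (void)result;  // oops: no assert
}

int main() {
    test_add_returns_sum();
    test_add_handles_negatives();
    return 0;
}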
My preferred method is to write a test and then write code to make it pass (TDD), but even when you wind up writing tests for existing code (for instance working with legacy code) you still want to do the RED - GREEN - Refactor process.
Writing a test that you believe will fail for the existing code (say by reversing your assert) and then verifying that it does, in fact, fail will give you confidence that your test is working correctly and when you set the assert back to the correct sense it will pass convincingly. Otherwise, how do you know that you are not getting a false positive from the test - or that the test is actually being run by the test runner (with some unit test frameworks)?

How do I refactor unit tests? [closed]

This has been driving me nuts lately...
What is refactoring?
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
And how do we make sure we don't break anything during refactoring?
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests are used to demonstrate that the behavior of the module is correct before the refactoring.
Okay fine. But how do I proceed if I find a code smell in the unit tests themselves? Say, a test method that does too much? How do I make sure I don't break anything while refactoring the unit tests?
Do I need some kind of meta-tests? Is it unit tests all the way down?
Or do unit tests simply not obey the normal rules of refactoring?
In my experience, there are two reasons to trust tests:
Review
You've seen it fail
Both of these are activities that happen when a test is written. If you keep tests immutable, you can keep trusting them.
Every time you modify a test, it becomes less trustworthy.
You can somewhat alleviate that problem by repeating the above process: review the changes to the tests, and temporarily change the System Under Test (SUT) so that you can see the tests fail as expected.
When modifying tests, keep the SUT unchanged. Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
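As a deliberately tiny C++ sketch of that procedure (invented names, not from the original answer):
#include <cassert>

// System Under Test: left untouched while its test is being refactored.
int discounted_price(int price) {
    // To re-establish trust after refactoring the test, temporarily
    // sabotage this line (e.g. "return price;"), watch the refactored
    // test go red, then revert the sabotage.
    return price * 90 / 100;
}

// The refactored test: same expected behaviour, new structure.
void test_ten_percent_discount_is_applied() {
    assert(discounted_price(200) == 180);
}

int main() {
    test_ten_percent_discount_is_applied();
    return 0;
}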
While I respect that this is an older post, it was referenced in a comment on my post about TDD in practice. So upon review, I'd like to throw in my two cents.
Mainly because I feel the accepted answer makes the slippery statement:
Every time you modify a test, it becomes less trustworthy.
I take issue with the word modify. In the context of refactoring, words like change and modify are often avoided because they carry implications counter to refactoring.
If you modify a test in the traditional sense, there is a risk you have introduced a change that made the test less trustworthy.
However, if you modify a test in the refactoring sense, then the test should be no less trustworthy.
This brings me back to the original question:
How do I refactor unit tests?
Quite simply, the same as you would any other code - in isolation.
So, if you want to refactor your tests, don't change the code, just change your tests.
Do I need test for my tests?
No. In fact, Kent Beck addresses this exact question in his Full Stack Radio interview, saying:
Your code is the test for your tests
Mark Seemann also notes this in his answer:
Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
In the end, this is not so much about how to refactor tests as much as it is refactoring in general. The same principles apply, namely refactoring restructures code without changing its external behavior. If you don't change the external behavior, then no trust is lost.
How do I make sure I don't break anything while refactoring the unit tests?
Keep the old tests as a reference.
To elaborate: unit tests with good coverage are worth their weight in results. You don't keep them for amazing program structure or lack of duplication; they're essentially a dataset of useful input/output pairs.
So when "refactoring" tests, it only really matters that the program tested with the new set shows the same behaviour. Every difference should be carefully, manually inspected, because new program bugs might have been found.
You might also accidentally reduce the coverage when refactoring. That's harder to find, and requires specialized coverage analysis tools.
You don't know you won't break anything. To avoid the problem of "who will test our tests?", you should keep tests as simple as possible to reduce the possibility of making an error.
When you refactor tests, you can always use automatic refactorings or other "trusted" methods such as method extraction (see the sketch below).
You also often use existing testing frameworks, which are tested by their creators. So when you start to build your own framework (even a simple one) or complex helper methods, you can always test those as well.
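A small C++ sketch of that kind of extraction (invented names): the duplicated setup moves into a helper that is trivial enough to trust, and could itself be tested if it ever grew.
#include <cassert>
#include <string>
#include <vector>

// Hypothetical code under test.
std::string join(const std::vector<std::string>& parts, const std::string& sep) {
    std::string out;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i > 0) out += sep;
        out += parts[i];
    }
    return out;
}

// Extracted helper: the duplicated setup from several tests now lives here.
std::vector<std::string> sample_parts() { return {"a", "b", "c"}; }

void test_join_with_comma() { assert(join(sample_parts(), ",") == "a,b,c"); }
void test_join_with_dash()  { assert(join(sample_parts(), "-") == "a-b-c"); }

int main() {
    test_join_with_comma();
    test_join_with_dash();
    return 0;
}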

Is it acceptable as a professional developer not to write unit tests? [closed]

Just wondering about the pros and cons of TDD/automated unit testing, and looking for the community's view on whether it's acceptable for professional developers to write applications without supporting unit tests.
Re-asked on Programmers: https://softwareengineering.stackexchange.com/questions/159572/is-it-acceptable-as-a-professional-developer-not-to-write-unit-tests
I bet I'll get -1'ed for this, but I still say: if you have other measures to ensure quality, including avoiding regressions, program validation, and program verification, then no, you don't strictly need unit tests.
The only problem is usually that people don't have any other tools than unit testing to achieve this.
In case you have formally tested models (there's a tool that actually tests the model, or it was constructed in a way that ensures it's valid), and you have formally tested ways to ensure that the actually running software conforms to that model, then it's fine.
Example: if you are sure that the code you wrote in Ruby will act as you expect (because you or someone else tested the Ruby interpreter and it doesn't have bugs, or you use only a subset of features known to be safe), then it's fine. Usually, we trust C compilers and CPUs in this manner.
Also, if a program is only to be used once, there's no regression problem! If I write a one-liner in bash which will calculate something for me, I might test it first manually on fake data, then run it on the real data - no need to write an automated test.
If you take the blame, you can also go along with assumptions: I usually assume that Eclipse is pretty good at creating setters and getters, and I don't test those. I also assume that if there were any problem with the Collection classes in Java 7, it would have turned up by now. But if there is trouble, it's your personal trouble. Don't blame anyone else.
Personally, I rarely use unit testing on certain code, as I formally test it while it's still a flowchart on a piece of paper, and I ensure that I only use subsets of the language/libraries that are known to work in such situations. Also, I never let code out without peer review. Still, it's sometimes better if there's someone who runs an acceptance test on it...
It is up to you. The question is more philosophical in nature.
Unit tests are just a tool to help you. You can choose to ignore them. However, if you are going to work on a more than trivial project, I would advise you to use unit tests.
Yes, they take time to write, too. But in the end you will save a lot if any refactoring is done or some parts of the code need to be changed.
As always: It depends.
Generally speaking, unit tests are a good thing: they catch a whole class of errors, they verify that particular parts of your code work as expected under given circumstances, and they make it easier to track down errors when something does go wrong. So unless you have good reasons not to, you should write unit tests.
Good reasons not to write unit tests include:
Making relatively small changes to a codebase that is structured badly and is therefore hardly testable (usually the reason is that there is little separation of concerns, and testable units cannot be isolated for testing without intrusive changes to the codebase itself).
The nature of the problem domain makes the code inherently untestable. This is rare, but it happens - for example, it is very hard to come up with meaningful unit tests for a routine that draws a GUI: you'd have to make it render to a mocked surface and then check individual pixels, but you'd also have to mock all the parameters that influence layout decisions, etc.; while this is theoretically possible, it's not usually worth the effort, and one should opt for manual or semi-automatic testing in such cases.
The project is a tiny throwaway program with such a small scope and such a short lifecycle that the benefit gained from unit testing (increased maintainability, decreased complexity) is marginal. Keep in mind, however, that software tends to live longer than it was designed for, and your one-off throwaway script might very well end up becoming a mission-critical part of the company's processes.

New to unit testing, how to write great tests? [closed]

I'm fairly new to the unit testing world, and I just decided to add test coverage for my existing app this week.
This is a huge task, mostly because of the number of classes to test but also because writing tests is all new to me.
I've already written tests for a bunch of classes, but now I'm wondering if I'm doing it right.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is just a feeling, and as said earlier, I have no experience of testing. If some more experienced testers out there could give me advice on how to write great tests for an existing app, that would be greatly appreciated.
Edit: I would love to thank Stack Overflow; I got great input in less than 15 minutes, which answered more than the hours of online reading I just did.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
I think you are doing it wrong.
A unit test should:
test one method
provide some specific arguments to that method
test that the result is as expected
It should not look inside the method to see what it is doing, so changing the internals should not cause the test to fail. You should not directly test that private methods are being called. If you are interested in finding out whether your private code is being tested then use a code coverage tool. But don't get obsessed by this: 100% coverage is not a requirement.
If your method calls public methods in other classes, and these calls are guaranteed by your interface, then you can test that these calls are being made by using a mocking framework.
You should not use the method itself (or any of the internal code it uses) to generate the expected result dynamically. The expected result should be hard-coded into your test case so that it does not change when the implementation changes. Here's a simplified example of what a unit test should do:
// Assuming an NUnit-style C# test, since the original answer does not name a framework.
using NUnit.Framework;

public class CalculatorTests
{
    [Test]
    public void TestAdd()
    {
        int x = 5;
        int y = -2;
        int expectedResult = 3;  // hard-coded, not computed by the code under test
        Calculator calculator = new Calculator();
        int actualResult = calculator.Add(x, y);
        Assert.AreEqual(expectedResult, actualResult);
    }
}
Note that how the result is calculated is not checked - only that the result is correct. Keep adding more and more simple test cases like the above until you have covered as many scenarios as possible. Use your code coverage tool to see if you have missed any interesting paths.
For unit testing, I found both test-driven (tests first, code second) and code-first, test-second approaches to be extremely useful.
Instead of writing code and then writing tests, write the code and then look at what you THINK the code should be doing. Think about all of its intended uses and then write a test for each. I find writing tests to be faster, but more involved, than the coding itself. The tests should test the intention. Thinking about the intentions also means you end up finding corner cases during the test-writing phase. And of course, while writing tests you might find that one of the few uses causes a bug (something I often find, and I am very glad this bug did not corrupt data and go unchecked).
Yet testing is almost like coding twice. In fact I had applications where there was more test code (quantity) than application code. One example was a very complex state machine. I had to make sure that after adding more logic to it, the entire thing always worked on all previous use cases. And since those cases were quite hard to follow by looking at the code, I wound up having such a good test suite for this machine that I was confident that it would not break even after making changes, and the tests saved my ass a few times. And as users or testers were finding bugs with the flow or corner cases unaccounted for, guess what, added to tests and never happened again. This really gave users confidence in my work in addition to making the whole thing super stable. And when it had to be re-written for performance reasons, guess what, it worked as expected on all inputs thanks to the tests.
All the simple examples like function square(number) are great and all, but they are probably bad candidates for spending lots of time on testing. The functions that do important business logic are where the testing is important. Test the requirements. Don't just test the plumbing. If the requirements change then, guess what, the tests must change too.
Testing should not be literally testing that function foo invoked function bar 3 times. That is wrong. Check if the result and side-effects are correct, not the inner mechanics.
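For illustration, a minimal C++ sketch of the difference (names invented): the test asserts on the observable result and side-effect, not on how many times an inner routine was invoked.
#include <cassert>
#include <vector>

// Code under test: what matters is the final state, not how it got there.
void remove_negatives(std::vector<int>& values) {
    std::vector<int> kept;
    for (int v : values) {
        if (v >= 0) kept.push_back(v);
    }
    values = kept;
}

void test_remove_negatives_keeps_only_non_negative_values() {
    std::vector<int> values{3, -1, 0, -7, 5};
    remove_negatives(values);
    // Behavioural check: not "push_back was called three times".
    assert((values == std::vector<int>{3, 0, 5}));
}

int main() {
    test_remove_negatives_keeps_only_non_negative_values();
    return 0;
}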
It's worth noting that retro-fitting unit tests into existing code is far more difficult than driving the creation of that code with tests in the first place. That's one of the big questions in dealing with legacy applications... how to unit test? This has been asked many times before (so you may be closed as a dupe question), and people usually end up here:
Moving existing code to Test Driven Development
I second the accepted answer's book recommendation, but beyond that there's more information linked in the answers there.
Don't write tests just to get full coverage of your code. Write tests that guarantee your requirements. You may discover code paths that are unnecessary. Conversely, if they are necessary, they are there to fulfill some kind of requirement; find out what it is and test the requirement (not the path).
Keep your tests small: one test per requirement.
Later, when you need to make a change (or write new code), try writing one test first. Just one. Then you'll have taken the first step in test-driven development.
Unit testing is about the output you get from a function/method/application.
It does not matter at all how the result is produced, it just matters that it is correct.
Therefore, your approach of counting calls to inner methods and such is wrong.
What I tend to do is sit down and write what a method should return given certain input values or a certain environment, then write a test which compares the actual value returned with what I came up with.
Try writing a Unit Test before writing the method it is going to test.
That will definitely force you to think a little differently about how things are being done. You'll have no idea how the method is going to work, just what it is supposed to do.
You should always be testing the results of the method, not how the method gets those results.
Tests are supposed to improve maintainability. If you change a method and a test breaks, that can be a good thing. On the other hand, if you look at your method as a black box, it shouldn't matter what is inside the method. The fact is that you need to mock things for some tests, and in those cases you really can't treat the method as a black box. The only thing you can do then is to write an integration test: you load up a fully instantiated instance of the service under test and have it do its thing as it would when running in your app. Then you can treat it as a black box.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is because you are writing your tests after you wrote your code. If you did it the other way around (wrote the tests first), it wouldn't feel this way.