Guidelines for writing a test suite [closed] - c++

What are the best practices/guidelines for writing a test suite for C++ projects?

This is a very broad question. For unit testing and Test Driven Development (TDD), there is some useful (if somewhat platitudinous in parts) guidance on this from Microsoft - you can overlook the Visual Studio-specific advice if it does not apply.
If you are looking for guidance on system or performance testing, I would clarify your question. There is a decent broader rationale in the docs for Boost.Test.
There are several unit testing best practices to review.
Firstly, TDD is an invaluable practice. Of all the development methodologies available, TDD is probably the one that will most radically improve development for many years to come, and the investment is minimal. Any QA engineer will tell you that developers can't write successful software without corresponding tests. With TDD, the practice is to write those tests before even writing the implementation and, ideally, to write the tests so that they can run as part of an unattended build script. It takes discipline to begin this habit, but once it is established, coding without the TDD approach feels like driving without a seatbelt.
For the tests themselves, some additional principles will help with successful testing (a sketch of several of these appears after the list):
- Avoid creating dependencies between tests such that tests need to run in a particular order. Each test should be autonomous.
- Use test initialization code to verify that test cleanup executed successfully, and re-run the cleanup before executing a test if it did not.
- Write tests before writing any production code implementation.
- Create one test class corresponding to each class within the production code. This simplifies the test organization and makes it easy to choose where to place each test.
- Use Visual Studio to generate the initial test project. This will significantly reduce the number of steps needed when manually setting up a test project and associating it with the production project.
- Avoid creating machine-dependent tests, such as tests that depend on a particular directory path.
- Create mock objects to test interfaces. Mock objects are implemented within a test project to verify that the API matches the required functionality.
- Verify that all tests run successfully before moving on to creating a new test. That way you ensure that you fix code immediately upon breaking it.
- Maximize the number of tests that can be run unattended. Make absolutely certain that there is no reasonable unattended testing solution before relying solely on manual testing.
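To make several of these points concrete, here is a minimal sketch assuming GoogleTest as the framework (any xUnit-style C++ framework works similarly); the Counter class is a hypothetical example, not from the original question:

#include <gtest/gtest.h>

// Hypothetical production class, used only for illustration.
class Counter {
public:
    void increment() { ++value_; }
    int value() const { return value_; }
private:
    int value_ = 0;
};

// One test class per production class. The fixture gives every test
// a fresh Counter, so no test depends on another test's state or on
// the order in which tests run.
class CounterTest : public ::testing::Test {
protected:
    void SetUp() override { counter_ = Counter{}; }  // reset to a known-good state before each test
    Counter counter_;
};

TEST_F(CounterTest, StartsAtZero) {
    EXPECT_EQ(counter_.value(), 0);
}

TEST_F(CounterTest, IncrementAddsOne) {
    counter_.increment();
    EXPECT_EQ(counter_.value(), 1);
}

Because each test builds its own state in SetUp, the suite can run in any order (or in parallel) and still pass, which is exactly the property an unattended build script needs.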

TDD is certainly one set of best practices. When retrofitting tests, I aim for two things: code coverage and boundary condition coverage. Basically, you should pick inputs to functions such that (a) all code paths are tested - better still, all permutations of all code paths, although that can be a large number of cases and is not really necessary when the path differences are superficial - and (b) all boundary conditions (the conditions that cause variation in code path selection) are tested. If your code has an if (x > 5) in it, test with x = 5 and x = 6 to get both sides of the boundary.
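To make the boundary idea concrete, here is a minimal sketch, again assuming GoogleTest; is_large is a hypothetical function containing the branch in question:

#include <gtest/gtest.h>

// Hypothetical function with the boundary `x > 5`.
bool is_large(int x) { return x > 5; }

// Test both sides of the boundary: 5 exercises the "false" path,
// 6 exercises the "true" path, so both branches are covered.
TEST(BoundaryTest, FiveIsNotLarge) { EXPECT_FALSE(is_large(5)); }
TEST(BoundaryTest, SixIsLarge) { EXPECT_TRUE(is_large(6)); }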

Related

Confused about Classical TDD and Mockist [closed]

Here is an article: https://martinfowler.com/articles/mocksArentStubs.html#ClassicalAndMockistTesting
It's in relation to classical TDD and mockist TDD. My understanding was that classes should be tested in isolation, and therefore ALL dependencies should be stubbed/mocked. However, according to the article, it seems there's a large group of people, the classical TDDers, who use real objects. There are various articles on the internet that emphasize that unit tests should not use real classes other than the SUT, of course. For example, take a look at this from Microsoft's website on stubs: https://learn.microsoft.com/en-us/visualstudio/test/using-stubs-to-isolate-parts-of-your-application-from-each-other-for-unit-testing
public int GetContosoPrice()
{
    var stockFeed = new StockFeed(); // NOT RECOMMENDED
    return stockFeed.GetSharePrice("COOO");
}
Can someone clear up my confusion?
You don't seem to be confused at all - there are two different schools of thought on what a "unit test" is, and therefore how it should be used.
For instance, Kent Beck, in Test Driven Development By Example, writes
The problem with driving development with small-scale tests (I call them "unit tests" but they don't match the accepted definition of unit tests very well)...
Emphasis added.
It may help to keep in mind that 20 years ago, the most common testing pattern was the "throw it over the wall to QA" test. Even in cases where automated tests were present, the disciplines required to make those tests effective were not common knowledge.
So it was important to communicate the idea that tests should be isolated from other tests. If developers were going to be running tests as often as the extreme programmers were insisting that they should, then those tests needed to be reliable and fast in wall clock time. Tests that don't share any mutable state (either themselves, or indirectly via the system under test) can be run effectively in parallel, reducing the wall clock time, and therefore reducing the developer interruption that they introduce.
There is a separate discipline that says, in addition to the isolation described above, we should also be striving for tests that check the system in isolation from other parts of the system.
If you want to get a real sense of the history of people with these different ideas talking past each other (including the history of recognizing that they are talking past each other and trying to invent new labels), a good starting point is the C2 wiki:
http://wiki.c2.com/?UnitTest
http://wiki.c2.com/?ShouldUnitTestsTestInteroperations
http://wiki.c2.com/?DeveloperTest
http://wiki.c2.com/?ProgrammerTest
For a modern perspective, you might start with Ham Vocke's Practical Test Pyramid
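To see the two schools side by side in C++ terms, here is a hedged sketch assuming GoogleTest/GoogleMock; StockFeed, PriceQuoter, and the ticker symbol are hypothetical stand-ins modeled on the Microsoft example above:

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

// The dependency sits behind an interface, so the caller can be
// tested without a real stock feed.
class StockFeed {
public:
    virtual ~StockFeed() = default;
    virtual int GetSharePrice(const std::string& ticker) = 0;
};

// System under test: receives its dependency via injection instead
// of calling `new StockFeed()` itself.
class PriceQuoter {
public:
    explicit PriceQuoter(StockFeed& feed) : feed_(feed) {}
    int GetContosoPrice() { return feed_.GetSharePrice("COOO"); }
private:
    StockFeed& feed_;
};

// Mockist style: replace the collaborator with a mock and verify
// the interaction itself.
class MockStockFeed : public StockFeed {
public:
    MOCK_METHOD(int, GetSharePrice, (const std::string&), (override));
};

TEST(PriceQuoterTest, MockistStyle) {
    MockStockFeed feed;
    EXPECT_CALL(feed, GetSharePrice("COOO")).WillOnce(::testing::Return(42));
    PriceQuoter quoter(feed);
    EXPECT_EQ(quoter.GetContosoPrice(), 42);
}

// Classical style: use a real object where practical, or a cheap
// hand-written fake like this one, and check only the observable
// result rather than the interaction.
class FakeStockFeed : public StockFeed {
public:
    int GetSharePrice(const std::string&) override { return 42; }
};

TEST(PriceQuoterTest, ClassicalStyle) {
    FakeStockFeed feed;
    PriceQuoter quoter(feed);
    EXPECT_EQ(quoter.GetContosoPrice(), 42);
}

Both tests verify the same behavior; they differ only in whether the collaborator's interactions are asserted on (mockist) or treated as an implementation detail (classical).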

Confusion about unit testing frameworks? [closed]

I get the concept of unit testing and TDD as a whole.
However, I'm still a little confused about what exactly unit testing frameworks are. Whenever I read about unit testing, it's usually an explanation of what it is, followed by "oh, here are the frameworks for this language, e.g. JUnit".
But what does that really mean? Are frameworks just a sort of testing library that allows programmers to write simpler, more efficient unit tests?
Also, what are the benefits of using a framework? As I understand it, unit testing is done on small chunks of code at a time, e.g. a method. However, I could individually write a test for a method without using a unit testing framework. Is it maybe for standardization of testing practices?
I'm just very new to testing and unit testing; clarification on some basic concepts would be great.
A bit of a broad question, but I think there are certain thoughts that could count as facts for an answer:
When 5, 10, 100, ... people go forward to "work" with the same idea/concept (for example, unit testing) then, most likely, certain patterns and best practices will evolve. People have ideas, and by trial and error they find out which of those ideas are helpful and which are not.
Then people start to communicate their ideas, and those "commonly used" patterns undergo discussions and get further refined.
And sooner or later, people start thinking "I am doing the same task over and over again; I should write a program for me to do that".
And that is how frameworks come into existence: they are tools to support certain aspects of a specific activity.
Let's give an example: using a framework like JUnit, I can completely focus on writing test cases. I don't need to worry about accumulating failure statistics; I don't need to worry about how to make sure that really all my tests are executed when I want that to happen.
I simply understand how to use the JUnit framework, and I know how to further utilize JUnit test cases in conjunction with build systems such as Gradle or Maven, in order to have all my unit tests executed automatically, for example each time I push a commit into my source code management system.
Of course you can re-invent the wheel here and implement all of that yourself. But that is just a waste of time. It is like saying: "I want to move my crop to the market - let's start by building the truck myself." No. You rent or buy a pre-built truck, and you use that to do what you actually want to do (move things around).
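To make the truck analogy concrete: without a framework, every test file needs its own hand-written runner and reporting. A minimal sketch of the do-it-yourself version (is_large is a hypothetical function):

#include <cassert>
#include <iostream>

bool is_large(int x) { return x > 5; }

// Hand-rolled "runner": you write the discovery, execution, and
// reporting yourself, and the first failing assert aborts the run,
// hiding any later failures.
int main() {
    assert(!is_large(5));
    assert(is_large(6));
    std::cout << "all tests passed\n";
    return 0;
}

A framework such as GoogleTest or Catch2 automates exactly those repetitive parts: it discovers every test, keeps running after a failure, reports which assertion failed and where, and produces machine-readable results that build tools can collect.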

How do I measure the benefits of unit testing? [closed]

I'm in the process of pushing my company towards having unit tests as a major part of the development cycle. I've gotten the testing framework working with our MVC framework, and multiple members of the team are now writing unit tests. I'm at the point, though, where there's work that needs to be done to improve our hourly build, the ease of figuring out what fixtures you need to use, adding functionality to the mock object generator, etc., etc., and I want to be able to make the case for this work to management. In addition, I'd like us to allocate time to write unit tests for the most critical pieces of existing code, and I just don't see that happening without a more specific case than "everyone knows unit tests are good".
How do you quantify the positive impact of (comprehensive and reliable) unit tests on your projects? I can certainly look at the number and severity of bugs filed and correlate it with our increases in code coverage, but that's a rather weak metric.
Sonar is a company that makes a very interesting code inspection tool; they actually try to measure technical debt programmatically, correlating untested code with developer price per hour.
Quantification of test quality is very difficult.
I see code coverage only as guidance, not as a test-quality metric. You can literally write tests with 100% code coverage without testing anything (e.g. if no asserts are used at all). Also have a look at my blog post, where I warn against metric pitfalls.
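For instance, this hedged sketch (assuming GoogleTest; classify is hypothetical) reaches 100% line coverage of the function under test yet can never fail, because it asserts nothing:

#include <gtest/gtest.h>

// Hypothetical production function with two branches.
const char* classify(int x) { return x > 5 ? "large" : "small"; }

// Executes both branches, so coverage tools report 100%, but with
// no assertions the test verifies nothing and can never fail.
TEST(ClassifyTest, CoversEverythingChecksNothing) {
    classify(5);
    classify(6);
}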
The only sensible quantitative metric I know of, and the one that counts for business, is really the reduced effort of bug fixes in production code, along with reduced bug severity. Still, it is very difficult to show that unit tests are the only source of this success (it could also be an improvement in process or communication).
Generally I would focus on the qualitative approach:
Do developers feel more comfortable changing code (because tests are a trustworthy safety net)?
When bugs occur in production, does analysis show that they were in untested code (and, conversely, can you conclude that they would not have occurred if there had been unit tests)?

Can unit tests be implemented effectively with agile development? [closed]

Soon I will be involved in a project that will use the agile project management/development approach, with 5 (or so) 2-week sprints. The project will use a DDD approach, which I have found in the past works great with unit testing, hence my enthusiasm to use it for this project as well. The only problem is that, given the following factors, I am unsure whether unit testing can be successfully implemented with agile development:
Potential for constantly changing requirements (requirements change, tests break, tests need to be updated too).
Time factor (unit tests can make development take a fair bit longer, and if requirements change towards the end of a sprint there may be too little time to update both tests and production code at the best quality).
I have a feeling that if/when requirements change (especially if towards the end of a sprint) and given the tight deadlines unit tests will become a burden. Anyone have any good advice on the matter?
I think it cuts both ways. On one hand, yes, unit tests are extra code which requires extra maintenance, and will slow you down a bit. On the other hand, if requirements start evolving, having unit tests in place will be a life saver in making sure what you are changing still works.
Unless you have unit tests with high coverage, the cost of change will grow exponentially as the project moves forward. So basically, the more change you anticipate, the MORE you will actually need your unit tests.
Secondly, good unit tests depend on very few, small pieces of functionality in your production code. When this is true, only a few tests will be impacted when a feature changes. Basically, each test tests just one thing and one small piece of production code. The key to writing unit tests that follow this principle is to decouple your code and test in isolation.
Thirdly, you need to get a better understanding of the concept of DONE and why its definition is so important in terms of sustainable development. Basically, you can't go fast over time in a sustainable fashion if your team compromises the concept of DONE in the short term.
Considering 10+ weeks' worth of code with no test coverage makes me cringe. When will you have time to manually test all that code? And under evolving requirements, you will spend a lot more time tracking down the impacts changes have throughout your code base.
I cannot advise strongly enough to use unit testing. Even when doing DDD, let unit tests drive the implementation. Coupled with good patterns like DI/IoC and SRP, you should find both your code base and your tests to be more resilient to change, which will save you a lot of time throughout those sprints.
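As a hedged sketch of that last point (all names are hypothetical): depending on an injected abstraction rather than a concrete system facility is what keeps each test small and isolated, so a changed requirement breaks only the few tests that cover it:

#include <gtest/gtest.h>
#include <string>

// Abstraction for "now", so tests need not depend on the real clock.
class Clock {
public:
    virtual ~Clock() = default;
    virtual int hour() const = 0;
};

// Single responsibility: choose the greeting; the time source is injected.
class Greeter {
public:
    explicit Greeter(const Clock& clock) : clock_(clock) {}
    std::string greeting() const {
        return clock_.hour() < 12 ? "Good morning" : "Good afternoon";
    }
private:
    const Clock& clock_;
};

// A trivial hand-written test double; no framework magic required.
class FixedClock : public Clock {
public:
    explicit FixedClock(int h) : h_(h) {}
    int hour() const override { return h_; }
private:
    int h_;
};

TEST(GreeterTest, MorningGreeting) {
    FixedClock nine(9);
    EXPECT_EQ(Greeter(nine).greeting(), "Good morning");
}

TEST(GreeterTest, AfternoonGreeting) {
    FixedClock fifteen(15);
    EXPECT_EQ(Greeter(fifteen).greeting(), "Good afternoon");
}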

Guidelines for better unit tests [closed]

Jimmy Bogard wrote an article, Getting value out of your unit tests, where he gives four rules:
Test names should describe the what and the why, from the user’s perspective
Tests are code too, give them some love
Don’t settle on one fixture pattern/organizational style
One Setup, Execute and Verify per Test
In your opinion, are these guidelines complete? What are your guidelines for unit tests?
Please avoid language-specific idioms; try to keep answers language-agnostic.
There's an entire 850-page book called xUnit Test Patterns that deals with this topic, so it's not something that can easily be boiled down to a few hard rules (although the rules you mention are good).
A more digestible book that also covers this subject is The Art of Unit Testing.
If I may add the rules I find most important, they would be:
Use Test-Driven Development. It's by far the most effective road towards good unit tests. Trying to retrofit unit tests onto existing code tends to be difficult at best.
Keep it simple: Ideally, a unit test should be less than 10 lines of code. If it grows to much more than 20 lines of code, you should seriously consider refactoring either the test code, or the API you are testing.
Keep it fast. Unit test suites are meant to be executed very frequently, so aim at keeping the entire suite under 10 s. That can easily mean keeping each test under 10 ms.
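Those size and speed targets fall out naturally when each test follows a single setup/execute/verify shape, as in this minimal sketch (assuming GoogleTest, with std::vector standing in for the code under test):

#include <gtest/gtest.h>
#include <vector>

// One setup, one execution, one verification: short, fast
// (no I/O, no sleeps), and readable at a glance.
TEST(VectorTest, PushBackAppendsValue) {
    std::vector<int> v;          // setup
    v.push_back(42);             // execute
    EXPECT_EQ(v.back(), 42);     // verify
}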
Writing unit tests is simple; it is writing unit-testable code that is difficult.
The Way of Testivus
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Break the code under test regularly to see how effective your unit tests are.
Take a look at the code coverage of your tests, and try to make it reasonably complete (for error cases, I'd use some discretion about whether to test them or not).
For many more good ideas on writing unit tests, search stackoverflow.com.