I have installed C++Test on my computer with only a Unit Test license, as a Visual Studio 2005 plugin ( cpptest_7.2.11.35_win32_vs2005_plugin.exe ).
I have a sample similar to the following:
bool MyFunction(... parameters... )
{
    bool bRet = true;
    // do something
    if( some_condition )
    {
        // do something
        bRet = CallToAFunctionThatCanReturnBothTrueAndFalse....
    }
    else
    {
        bRet = false;
        // do something
    }
    if(bRet == false)
    {
        // do something
    }
    return bRet;
}
In my case after running the coverage tool I have the following results (for a function similar to the previously mentioned):
[LC=100 BC=100 PC=75 DC=100 SCC=100 MCDC=50 (%)]
I really don't understand why I don't have 100% coverage when it comes to PathCoverage (PC).
Also if someone who has experience with C++Test Parasoft could explain the low MCDC coverage for me that would be great.
What should I do to increase coverage? I'm out of ideas in this case.
Directions to (some parts of) the documentation are welcome.
Thank you,
Iulian
I can't help with the specific tool you're using, but the general idea with path coverage is that each possible path through the code should be executed.
If you draw a flowchart through the program, branching at each if/break/continue, etc. you should see which paths your tests are taking through the program. To get 100% (which isn't totally necessary, nor does it assure a perfect test) your test will have to go down every branch of the code, executing every line.
Hope that helps.
This is a good reference on the various types of code coverage: http://www.bullseye.com/coverage.html.
MCDC: To improve MCDC coverage you'll need to look at some_condition. Assuming it's a complex boolean expression, you'll need to look at whether you're exercising the necessary combinations of values. Specifically, each boolean sub-expression needs to be exercised true and false.
Path: One of the things mentioned in the link above as being a disadvantage of path coverage is that many paths are impossible to exercise. That may be the case with your example.
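To make the MC/DC point concrete, here is a minimal sketch. The question never shows the real some_condition, so the two-term expression "p && q" below is purely a hypothetical stand-in:

```cpp
#include <cassert>

// Hypothetical stand-in for some_condition; the question doesn't show the
// real expression, so "p && q" is only an illustration.
bool some_condition(bool p, bool q) {
    return p && q;
}

// MC/DC requires showing that each sub-term independently affects the result.
// For "p && q", three cases are enough; (false, false) adds nothing because
// flipping a single term there cannot change the outcome on its own.
bool mcdc_cases_pass() {
    bool baseline  = some_condition(true, true);   // true:  both terms hold
    bool p_matters = some_condition(false, true);  // false: only p changed
    bool q_matters = some_condition(true, false);  // false: only q changed
    return baseline && !p_matters && !q_matters;
}
```

If your tests only ever exercise, say, (true, true) and (false, false), the tool cannot tell that q independently affects the outcome, which is exactly what a 50% MCDC figure suggests.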
You need at least two test cases to get 100% branch coverage: one where some_condition is true and one where it is not. If you have that you should get 100% coverage.
You shouldn't see 100% coverage as perfect, though. You would need 3 tests for that in this case, so all combinations can be tested. Look up cyclomatic complexity to learn more about that.
There are four hypothetical paths through that function. Each if-clause doubles the number of paths. Each if-statement is a branch where you can go two different ways. So whenever your tool encounters an "if", it assumes the code can either take the "true" branch or the "false" branch. However, this is not always possible. Consider:
bool x = true;
if (x) {
do_something();
}
The "false" branch of the if-statement is unreachable. This is an obvious example, but when you factor in several if-statements it becomes increasingly difficult to see whether a path is possible or not.
There are only three possible paths in your code. The path that takes the "false" branch in the first if statement and then the "false" branch in the second (i.e. skips its body) is unreachable, because the else branch forces bRet to false, which makes the second condition true.
Your tool is not smart enough to realize that.
That being said, even if the tool is perfect, obtaining 100 % path coverage is probably unlikely in a real application. However, very low path coverage is a sure sign that your method has too high cyclomatic complexity.
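The three reachable paths can be sketched with a simplified stand-in for the question's function. The inner call's result is injected as a parameter so a test can drive both of its outcomes; that parameter is an assumption for illustration, not part of the original code:

```cpp
#include <cassert>

// Simplified stand-in for the question's MyFunction. "inner_result" plays the
// role of CallToAFunctionThatCanReturnBothTrueAndFalse.
bool MyFunction(bool some_condition, bool inner_result) {
    bool bRet = true;
    if (some_condition) {
        bRet = inner_result;
    } else {
        bRet = false;  // forces the second if below to be taken
    }
    if (bRet == false) {
        // error handling / "do something"
    }
    return bRet;
}
```

The three reachable paths are (true, true), (true, false) and (false, anything); the fourth path, taking the else branch and then skipping the second if, cannot occur because the else branch forces bRet to false.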
Personally, I think it's bad form to start ANY function with
bool retCode = true;
You're making an explicit assumption that it'll succeed by default, and then fail under certain conditions.
Programmers coming after you will not make this same assumption.
Fail fast, fail early.
And as others have said, if you want to test failure cases, you have to code tests that fail.
This is a very basic question, but I still cannot find the appropriate answer. In my test there is a possibility of null values, and because of that the last stage (Act) starts looking a little bit strange (it is no longer Act only). What I mean is the following:
Assert.IsNotNull(variable);
var newVariable = variable.Property;
Assert.IsNotNull(newVariable);
var finalVariable = newVariable.AnotherProperty;
Assert.AreEqual(3, finalVariable.Count);
Now, they are obviously related, and I have to be sure that the values are not null; but there are three asserts in one test, and the Act part starts to look wrong.
So what is the general solution in such cases? Is there anything smarter than 3 tests with one assert each and checks for null before the asserts of the last 2?
Basically there are two ways of dealing with your problem:
Guard assertions: extra asserts making sure data is in a known state before the proper test takes place (that's what you're doing now).
Moving guard assertions to their own tests.
Which option to choose largely depends on the code under test. If the preconditions would be duplicated in other tests, that's a hint for the separate-test approach. If the precondition is reflected in production code, that again hints at the separate-test approach.
On the other hand, if it's only something you do to boost your confidence, maybe separate test is too much (yet as noted in other answers, it might be a sign that you're not in full control of your test or that you're testing too many things at once).
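The question is C#, but the two styles translate to any language; here is a sketch in C++ with a made-up object graph mirroring variable -> Property -> AnotherProperty:

```cpp
#include <cassert>
#include <vector>

// Made-up object graph mirroring variable -> Property -> AnotherProperty.
struct Inner { std::vector<int> another_property; };
struct Outer { const Inner* property = nullptr; };

// Guard-assertion style: preconditions asserted inline before the real check.
void test_count_with_guards(const Outer* variable) {
    assert(variable != nullptr);                  // guard
    const Inner* inner = variable->property;
    assert(inner != nullptr);                     // guard
    assert(inner->another_property.size() == 3);  // the actual assertion
}

// Separate-test style: the precondition gets its own predicate, so a dedicated
// "graph is non-null" test can live next to the "count is 3" test.
bool graph_is_complete(const Outer* variable) {
    return variable != nullptr && variable->property != nullptr;
}
```

With the second style, a broken precondition fails its own clearly named test instead of making the main test fail for an unrelated-looking reason.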
I think you should split this test into three tests and name them according to what's happening. That's perfectly sensible even if the acts in those tests are the same; you are testing different scenarios by checking the return value of the method.
Nulls are a royal pain. The question is: can they legitimately exist?
Let's separate our discussion to code and tests.
If the null shouldn't exist then the code itself, not the tests, should check and verify that they are not null. For this reason each and every method of my code is built using a snippet that checks the arguments:
public VideoPosition(FrameRate theFrameRate, TimeSpan theAirTime)
{
    Logger.LogMethod("theFrameRate", theFrameRate, "theAirTime", theAirTime);
    try
    {
        #region VerifyInputs
        Validator.Verify(theFrameRate);
        Validator.Verify(theAirTime);
        Validator.VerifyTrue(theAirTime.Ticks >= 0, "theAirTime.Ticks >= 0");
        #endregion
        // ... rest of the constructor ...
    }
    catch
    {
        throw;
    }
}
If null ARE legitimate in the code, but you are testing a scenario where the returned values shouldn't be null, then of course you have to verify this in your testing code.
In your Unit Test you should be able to control every input to your class under test. This means that you control if your variable has a value or not.
So you would have one unit test that forces your variable to be null and then asserts this.
You will then have another test where you can be sure that your variable has a value, and you only need the other asserts.
I wrote a blog about this some time ago. Maybe it can help: Unit Testing, hell or heaven?
I'm implementing automated testing with CppUTest in C++.
I realize I end up almost copying and pasting the logic being tested into the tests themselves, so I can check the expected outcomes.
Am I doing it right? should it be otherwise?
edit: I'll try to explain better:
The unit being tested takes input A, makes some processing and returns output B
So apart from making some black-box checks, like checking that the output lies in an expectable range, I would also like to see if the output B that I got is the right outcome for input A, i.e. if the logic is working as expected.
So for example if the unit just makes A times 2 to yield B, then in the test I have no other way of checking than making again the calculation of A times 2 to check against B to be sure it went alright.
That's the duplication I'm talking about.
// Actual function being tested:
int times2( int a )
{
return a * 2;
}
// Test:
int test_a = 5;
int expected_b = test_a * 2; // here I'm duplicating times2()'s logic
int actual_b = times2( test_a );
CHECK( actual_b == expected_b );
PS: I think I will reformulate this in another question with my actual source code.
If your goal is to build automated tests for your existing code, you're probably doing it wrong. Hopefully you know what the result of frobozz.Gonkulate() should be for various inputs and can write tests to check that Gonkulate() is returning the right thing. If you have to copy Gonkulate()'s convoluted logic to figure out the answer, you might want to ask yourself how well you understand the logic to begin with.
If you're trying to do test-driven development, you're definitely doing it wrong. TDD consists of many quick cycles of:
Writing a test
Watching it fail
Making it pass
Refactoring as necessary to improve the overall design
Step 1 - writing the test first - is an essential part of TDD. I infer from your question that you're writing the code first and the tests later.
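One such cycle can be sketched for the times2 example from the question (names reused from the question; the "stub first" step is shown only in comments):

```cpp
#include <cassert>

// Step 3 of the cycle: the minimal implementation that makes the test pass.
// In step 1 this function didn't exist yet; a stub like "return 0;" would
// have made the test below fail (step 2), which is exactly the point.
int times2(int a) {
    return a * 2;
}

// Step 1: the test, written before the implementation. Note the expected
// values are known answers, not recomputed with "a * 2".
void test_times2_doubles_its_input() {
    assert(times2(4) == 8);
    assert(times2(0) == 0);
    assert(times2(-3) == -6);
}
```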
So for example if the unit just makes A times 2 to yield B, then in the test I have no other way of checking than making again the calculation of A times 2 to check against B to be sure it went alright.
Yes you do! You know how to calculate A times two, so you don't need to do it in code. If A is 4, then you know the answer is 8. So you can just use it as the expected value.
CHECK( actual_b == 8 )
If you are worried about magic numbers, don't be. Nobody will be confused about the meaning of the hard-coded numbers in the following line:
CHECK( times2(4) == 8 );
If you don't know what the result should be, then your unit test is useless. If you need to calculate the expected result, then you are either using the same logic as the function, or using an alternate algorithm to work out the result. In the first case, if the logic you duplicate is incorrect, your test will still pass! In the second case, you are introducing another place for a bug to occur. If a test fails, you will need to work out whether it failed because the function under test has a bug, or because your test method has a bug.
I think this one is a tough one to crack because it's essentially a mentality shift. It was somewhat hard for me.
The thing about tests is to have your expectations nailed down and to check whether your code really does what you think it does. Think of ways of exercising it, not checking its logic so directly, but as a whole. If that's too hard, maybe your function/method just does too much.
Try to think of your tests as working examples of what your code can do, not as a mathematical proof.
The programming language shouldn't matter.
var ANY_NUMBER = 4;
Assert.That(times_2(ANY_NUMBER), Is.EqualTo(ANY_NUMBER * 2));
In this case, I wouldn't mind duplicating the logic. The expected value is readable compared to a bare 8. And this logic doesn't look like a change magnet; it's relatively static.
For cases where the logic is more involved (chunky) and prone to change, duplicating the logic in the test is definitely not recommended. Duplication is evil. Any change to the logic would ripple changes to the test. In that case, I'd use hardcoded input/expected-output pairs with readable pair names.
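Such input/expected-output pairs can be sketched for the times2 example like this (the Case table and its names are made up for illustration):

```cpp
#include <cassert>

int times2(int a) {
    return a * 2;
}

// Hardcoded input/expected-output pairs with readable names: the table
// documents behaviour, and no production logic is duplicated in the test.
struct Case {
    const char* name;
    int input;
    int expected;
};

const Case kCases[] = {
    {"zero stays zero",        0,  0},
    {"positive value doubles", 4,  8},
    {"negative value doubles", -3, -6},
};

bool all_cases_pass() {
    for (const Case& c : kCases) {
        if (times2(c.input) != c.expected) {
            return false;  // c.name identifies the failing pair
        }
    }
    return true;
}
```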
Suppose I have the following function (pseudocode):
bool checkObjects(a, b)
{
    if ((a.isValid() && a.hasValue()) ||
        (b.isValid() && b.hasValue()))
{
return true;
}
return false;
}
Which tests should I write to be able to claim that it's 100% covered?
There are 16 possible input combinations in total. Should I write 16 test cases, or should I try to be smart and omit some test cases?
For example, should I write test for
[a valid and has value, b valid and has value]
if I tested that it returns what expected for
[a valid and has value, b invalid and has value]
and
[a invalid and has value, b valid and has value]
?
Thanks!
P.S.: Maybe someone can suggest good reading on unit testing approaches?
Test Driven Development by Kent Beck is well-done and is becoming a classic (http://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530)
If you want to be thorough to the max, then yes, 16 checks would be worthwhile.
It depends. Speaking personally, I'd be satisfied to test all boundary conditions: both cases where the result is true but making any one item false would make the overall result false, and all 4 false cases where making one item true would make the overall result true. But that is a judgment call, and I wouldn't fault someone who did all 16 cases.
Incidentally if you unit tested one true case and one false one, code coverage tools would say that you have 100% coverage.
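That boundary set can be spelled out with a boolean stand-in for checkObjects (aV/aH mirror a.isValid()/a.hasValue(), and similarly for b; flattening the two objects to four bools is an assumption for illustration):

```cpp
#include <cassert>

// Boolean stand-in for the question's checkObjects.
bool checkObjects(bool aV, bool aH, bool bV, bool bH) {
    return (aV && aH) || (bV && bH);
}

// The two "true" boundary cases: flipping any single true bit makes them false.
bool true_boundaries_hold() {
    return checkObjects(true, true, false, false) &&
           checkObjects(false, false, true, true);
}

// The four "false" boundary cases: each becomes true by flipping one bit.
bool false_boundaries_hold() {
    return !checkObjects(false, true, false, true) &&
           !checkObjects(true, false, true, false) &&
           !checkObjects(false, true, true, false) &&
           !checkObjects(true, false, false, true);
}
```

That is six cases instead of sixteen, and each one sits right at a point where a single-bit mistake in the condition would be caught.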
If you are worried about writing 16 test cases, you can try features like NUnit's TestCase or MbUnit's RowTest. Other languages/frameworks have similar features.
This would allow you to test all 16 combinations with a single (and small) test case.
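In plain C++ the same row-based idea can be sketched with a table and a loop (no NUnit here, so the table plays the role of the row source; the expected values are hardcoded per row rather than recomputed):

```cpp
#include <cassert>

bool checkObjects(bool aV, bool aH, bool bV, bool bH) {
    return (aV && aH) || (bV && bH);
}

// Row-based testing in the spirit of NUnit's TestCase / MbUnit's RowTest:
// one row per input combination, expected value hardcoded.
struct Row { bool aV, aH, bV, bH, expected; };

const Row kRows[16] = {
    {false, false, false, false, false},
    {true,  false, false, false, false},
    {false, true,  false, false, false},
    {true,  true,  false, false, true },
    {false, false, true,  false, false},
    {true,  false, true,  false, false},
    {false, true,  true,  false, false},
    {true,  true,  true,  false, true },
    {false, false, false, true,  false},
    {true,  false, false, true,  false},
    {false, true,  false, true,  false},
    {true,  true,  false, true,  true },
    {false, false, true,  true,  true },
    {true,  false, true,  true,  true },
    {false, true,  true,  true,  true },
    {true,  true,  true,  true,  true },
};

bool all_rows_pass() {
    for (const Row& r : kRows) {
        if (checkObjects(r.aV, r.aH, r.bV, r.bH) != r.expected) return false;
    }
    return true;
}
```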
If testing seems hard, think about refactoring. I can see several approaches here. First, merge isValid() and hasValue() into one method and test it separately. And why does checkObjects(a, b) test two unrelated objects? Why not have checkObject(a) and checkObject(b), cutting down the exponential growth of combinations? Just a hint.
If you really want to test all 16 possibilities, consider some more table-ish tools, like Fitnesse (see http://fitnesse.org/FitNesse.UserGuide.FitTableStyles). Also check Parameterized JUnit runner and TestNG.
Consider the following code (from a requirement that says that 3 is special for some reason):
bool IsSpecial(int value)
{
    if (value == 3)
        return true;
    else
        return false;
}
I would unit test this with a couple of functions - one called TEST(3IsSpecial) that asserts that when passed 3 the function returns true and another that passes some random value other than 3 and asserts that the function returns false.
When the requirement changes and say it now becomes 3 and 20 are special, I would write another test that verifies that when called with 20 this function returns true as well. That test would fail and I would then go and update the if condition in the function.
Now, what if there are people on my team who do not believe in unit testing and they make this change? They will go directly to the code and change it, and since my second unit test might not test for 20 (it could be picking a random int or have some other int hardcoded), my tests are now out of sync with the code. How do I ensure that when they change the code, some unit test or other fails?
I could be doing something grossly wrong here so any other techniques to get around this are also welcome.
That's a good question. As you note a Not3IsNotSpecial test picking a random non-3 value would be the traditional approach. This wouldn't catch a change in the definition of "special".
In a .NET environment you can use the new code contracts capability to write the test predicate (the postcondition) directly in the method. The static analyzer would catch the defect you proposed. For example:
Contract.Ensures(value == 3 || Contract.Result<Boolean>() == false);
I think anybody that's a TDD fan is experimenting with contracts now to see use patterns. The idea that you have tools to prove correctness is very powerful. You can even specify these predicates for an interface.
The only testing approach I've seen that would address this is Model-Based Testing. The idea is similar to the contracts approach. You set up the Not3IsNotSpecial condition abstractly (e.g., IsSpecial(x) == false for all x != 3) and let a model execution environment generate concrete tests. I'm not sure, but I think these environments do static analysis as well. Anyway, you let the model execution environment run continuously against your SUT. I've never used such an environment, but the concept is interesting.
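A crude approximation of the model-based idea in plain C++ (the "model" here is just the rule restated once and swept across a range of inputs; real MBT environments are far more sophisticated):

```cpp
#include <cassert>

// Function under test, per the question's requirement: only 3 is special.
bool IsSpecial(int value) {
    return value == 3;
}

// The model: the specialness rule, stated independently of the implementation.
bool Model(int value) {
    return value == 3;
}

// Sweep a range of inputs against the model. If someone later makes 20
// special in IsSpecial but forgets to update the model (or vice versa),
// this sweep fails, flagging the divergence the question worries about.
bool implementation_matches_model(int lo, int hi) {
    for (int v = lo; v <= hi; ++v) {
        if (IsSpecial(v) != Model(v)) return false;
    }
    return true;
}
```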
Unfortunately, that specific scenario is something that is difficult to guard against. With a function like IsSpecial, it's unrealistic to test all four billion negative test cases, so, no, you're not doing something grossly wrong.
Here's what comes to me off the top of my head. Many repositories have hooks that allow you to run some process on each check-in, such as running the unit tests. It's possible to set a criterion that newly checked in code must reach some threshold of code coverage under unit tests. If the commit does not meet certain metrics, it is rejected.
I've never had to set one of these systems up, so I don't know what is involved, but I do know it's possible.
And believe me, I feel your pain. I work with people who are similarly resistant to unit testing.
One thing you need to think about is why 3 is a special value and the others are not. If it defines some aspect of your application, you can pull that aspect out and make an enum of it.
Then you can check that the method returns false if the value doesn't exist in the enum, and for the enum itself write a test that checks the set of possible values. If a new possible value is added, that test should fail.
So your method will become:
bool IsSpecial(int value)
{
    if (SpecialValues.has(value))
        return true;
    else
        return false;
}
and your SpecialValues will be an enum like:
enum SpecialValues {
    Three(3), Twenty(20);

    public final int value;

    SpecialValues(int value) { this.value = value; }
}
Now you should write tests for the enum's possible values. A simple test can check the total number of possible values, and another can check the values themselves.
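A C++ rendering of the same idea (C++ enums can't carry fields the Java way, so a set stands in for the enum; the names are made up for illustration):

```cpp
#include <cassert>
#include <set>

// Single source of truth for the special values, shared by production code
// and tests. Adding 20 here automatically extends IsSpecial.
const std::set<int>& SpecialValues() {
    static const std::set<int> values = {3, 20};
    return values;
}

bool IsSpecial(int value) {
    return SpecialValues().count(value) != 0;
}
```

The tests the answer suggests then become one assertion on SpecialValues().size() plus one assertion per expected member.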
The other point to make is that in a less contrived example:
20 might have been some valid condition to test for based on knowledge of the business domain. Writing tests in a BDD style based on knowledge of the business problem might have helped you explicitly catch it.
4 might have been a good value to test for due to its status as a boundary condition. This may have been more likely to change in the real world so would more likely show up in a full test case.
I'm reviewing a quite old project and have seen code like this for the second time already (C++-like pseudocode):
if( conditionA && conditionB ) {
actionA();
actionB();
} else {
if( conditionA ) {
actionA();
}
if( conditionB ) {
actionB();
}
}
In this code conditionA evaluates to the same result on both computations, and the same goes for conditionB. So the code is equivalent to just:
if( conditionA ) {
actionA();
}
if( conditionB ) {
actionB();
}
So the former variant is just twice as much code for the same effect. What could this manner of writing code (I mean the former variant) be called?
This is indeed bad coding practice, but be warned: if the evaluations of conditionA and conditionB have any side effects (variable increments, etc.), the two fragments are not equivalent.
I would call it bad code. I've tended to find similar constructs in projects that grew without any code review being done (or other lax development practices).
Guys? Look at this part: ( conditionA && conditionB )
Basically, if conditionA happens to be false, then it won't evaluate conditionB.
Now, it would be bad coding style, but if conditionA and conditionB aren't just evaluating data, and there's also some code behind these conditions that changes data, there could be a huge difference between the two notations!
If conditionA is false, then conditionA is evaluated twice and conditionB just once (the && short-circuits before reaching conditionB).
If conditionA is true and conditionB is false, then both conditions are evaluated twice.
If both conditions are true, both are evaluated just once.
In the second variant, both conditions are always evaluated exactly once. Thus, the two variants evaluate the conditions the same number of times only when both are true.
To make things more complex: if conditionB is false, actionA could change something that changes that evaluation! The else branch would then execute actionB too. But if both conditions evaluate to true and actionA changes the evaluation of conditionB to false, the first variant would still execute actionB, while the second would not.
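The evaluation counts can be demonstrated directly by instrumenting the conditions (a sketch; actionA/actionB are reduced to comments):

```cpp
#include <cassert>

// Counters recording how often each condition is evaluated.
int countA = 0;
int countB = 0;

bool conditionA(bool value) { ++countA; return value; }
bool conditionB(bool value) { ++countB; return value; }

// The redundant variant from the question.
void redundantVariant(bool a, bool b) {
    if (conditionA(a) && conditionB(b)) {
        /* actionA(); actionB(); */
    } else {
        if (conditionA(a)) { /* actionA(); */ }
        if (conditionB(b)) { /* actionB(); */ }
    }
}
```

With conditionA false, the && short-circuits, so conditionA ends up evaluated twice and conditionB once; with both true, each is evaluated exactly once; with only conditionB false, both are evaluated twice.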
I tend to refer to this kind of code as "Why make things easy when you can do it the hard way?" and consider it a design pattern. Actually, it's "Mortgage-Driven Development", where code is made more complex so the main developer will be the only one to understand it, while other developers will just become confused and hopefully give up on redesigning the whole thing. As a result, the original developer is required to stay just to maintain this code, which is called "job security", and is thus able to pay his mortgage for many, many years.

I wondered why something like this would be used, then realized that I use a similar structure in my own code. Similar, but not the same:
if (A && B) {
    action1;
} else if (A) {
    action2;
} else if (B) {
    action3;
} else {
    action4;
}
In this case, every action would be the display of a different message, a message that could not be generated by just concatenating two strings. Say it's part of a game, and A checks if you have enough energy while B checks if you have enough mass. If you don't have enough mass AND energy, you can't build anything anymore and a critical warning needs to be displayed. If you only have energy, a warning that you have to find more mass would be enough. With only mass, your builders would need to recharge. And with both, you can continue to build. Four different actions, thus this weird construction.
However, the sample in the Q shows something completely different. Basically, you'd get one message that you're out of mass and another that you're out of energy. These two messages aren't combined into a single message.
Now, in the example, if conditionA detects energy levels and conditionB detects mass levels, then both solutions would work fine. But if actionA tells your builders to drop their mass and start recharging, you'd suddenly gain a little mass again in your game, and conditionB's earlier finding that you ran out of mass would no longer be true, simply because actionA released mass again. If actionB is the command telling builders to start collecting mass as soon as they're able, the first solution gives all builders this command: they start collecting mass first, then continue their other actions. In the second solution, no such command is given; the builders recharge and start using the little mass that was just released. If this check runs every 5 minutes, those builders might recharge in one minute and then sit idle for 4 more minutes because they ran out of mass. In the first solution, they would immediately start collecting mass.
Yeah, it's a stupid example. Been playing Supreme Commander again. :-) Just wanted to come up with a possible scenario, but it could be improved a lot!...
It's code written by someone who doesn't know how to use a Karnaugh Map.
This is very close to the 'fizzbuzz' Design Pattern:
if( fizz && buzz ) {
printFizz();
printBuzz();
} else {
if( fizz ) {
printFizz();
}
else if( buzz ) {
printBuzz();
}
else {
printValue();
}
}
Maybe the code started life as an instance of fizzbuzz (maybe copied-n-pasted), then was refactored slightly into what you see today due to slightly different requirements, but the refactoring didn't go as far as it probably should have (boolean logic can sometimes be a bit trickier than one might think - hence fizzbuzz as an interview weed-out technique).
I would call it bad code too.
There is no best way of indenting, but there is one golden rule: choose one and stick with it.
This is "redundant" code, and yes it is bad. If some precondition must be added to the calls to actionA (assuming that the precondition can't be put into actionA itself), we now have to modify the code in 2 places, and therefore run the risk of overlooking one of them.
This is one of those times where you can feel better about deleting some lines of code, than in writing new ones.
inefficient code?
Also, could be called Paid per line
I would call it 'twice is better'. It's made like that to be sure that the runtime really understood the question ;).
(although in multi-threaded, not-safe environment, the result may differ between the two variants.)
I might call it "code I wrote last month while I was in a hurry / not focused / tired". It happens; we all make or have made these kinds of mistakes. Just change it. If you want, you can try to find out who did this (hope it is not you) and give him/her feedback.
Since you said you've seen this more than once, it seems that it's more than a one-time error due to being tired. I see several reasons for someone to repeatedly come up with such code:
The code was originally different and got refactored, but whoever did it overlooked that this is now redundant.
Whoever did this didn't have a good grasp of boolean logic.
(Also, there's the slight possibility that there might be more to this than what your simplified snipped shows.)
As pgast has said in a comment, there is nothing wrong with this code if actionA affects conditionB (note that this is not a condition with a side effect, but an action with a side effect, which you kind of expect).