PEX: How would you test an equality method in Pex?

So I'm here playing with Pex, and it seems like a great idea.
However, I'm having a few problems; for example, I have no way to test an equals method using parameterized unit tests.
Maybe there is no way, or maybe it's a technique I haven't figured out yet.
Someone must have a decent idea.
If I were doing this in Moq, for instance, I would ensure that all the properties on both objects are read and do the comparisons myself to verify them. However, I do not see how to use this approach with parameterized tests.
The problem is that I need to verify that method calls are made and that properties are set/read in my business logic. I have no idea how to do this in Pex, and there isn't really a massive amount of documentation out there.

There are some basic properties you can check that are related to the mathematical definition of equality:
does not crash: a == b never throws an exception
symmetric: (a == b) == (b == a)
reflexive: (a == a) == true
transitive: (a == b) && (b == c) ==> a == c
given a Func f, a == b ==> f(a) == f(b)
All of those are nice, but they definitely don't guarantee that the equality works. At some point you will have to specify, as assertions, what equality means for you; for example, that the values of property P should be equal, and so on. Ultimately, you will end up with a second specification of the equality in the form of tests.
Things get more interesting when you investigate the relationship with GetHashCode:
a.GetHashCode() != b.GetHashCode() ==> a != b
idempotent: a.GetHashCode() == a.GetHashCode()
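
To make these concrete, here is a minimal sketch of how such properties can be expressed as Pex parameterized unit tests, assuming the usual Microsoft.Pex.Framework attributes and MSTest assertions (MyType is a hypothetical class under test; substitute your own):

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[PexClass]
public partial class EqualityContractTests
{
    // Pex explores inputs for a and b; PexAssume prunes the null cases.
    [PexMethod]
    public void EqualsIsSymmetric(MyType a, MyType b)
    {
        PexAssume.IsNotNull(a);
        PexAssume.IsNotNull(b);
        Assert.AreEqual(a.Equals(b), b.Equals(a)); // symmetric
    }

    [PexMethod]
    public void EqualsIsReflexive(MyType a)
    {
        PexAssume.IsNotNull(a);
        Assert.IsTrue(a.Equals(a)); // reflexive
    }

    // Contrapositive of: a.GetHashCode() != b.GetHashCode() ==> a != b
    [PexMethod]
    public void EqualValuesHaveEqualHashCodes(MyType a, MyType b)
    {
        PexAssume.IsNotNull(a);
        PexAssume.IsNotNull(b);
        if (a.Equals(b))
            Assert.AreEqual(a.GetHashCode(), b.GetHashCode());
    }
}

Pex then searches for inputs that falsify each assertion; any failing input becomes a concrete repro test.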

Test cases needed for 100% coverage

I'm writing unit tests for this if condition:
if (a != null && a.x != null && a.y != null) {
    //do things
}
Sonar says it needs 6 cases covered for 100% coverage, and my code already covers 4.
I want to know all the test cases (6) needed to get 100% coverage.
Based on the code you have given, I can see one test case for each of the following conditions:
a == null
a != null && a.x == null
a != null && a.y == null
a.x = valid value
a.y = valid value
The '//do things' part may contain logical decisions that require additional tests. That is probably where Sonar finds the sixth test case.
SonarQube defines "condition coverage" like so:
Condition coverage (branch_coverage): On each line of code containing some boolean expressions, the condition coverage answers the following question: 'Has each boolean expression been evaluated both to true and to false?'. This is the density of possible conditions in flow control structures that have been followed during unit tests execution.
There are two problems.
First, condition coverage and branch coverage are different things. Your code has just two branches.
Second, you have to test all combinations within a condition.
All Sonar seems to care about is that each part has been true and false at least once: a is null, a is not null, a.x is null, a.x is not null, a.y is null, a.y is not null. Maybe that's how they got six? But that is not condition coverage: you can do it in two passes, everything true and everything false.
To be thorough, you need to test every combination of a, a.x, and a.y being null and not null, which would be 2^3 = 8 cases. However, a false short-circuits the && operator, so there is no need to go further once you hit a false:
a != null | a.x != null | a.y != null
----------+-------------+------------
    F     |     any     |     any
    T     |      F      |     any
    T     |      T      |      F
    T     |      T      |      T
That's four.
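
Those four rows map directly onto four unit tests. Here is a minimal self-contained sketch in C# with NUnit (the Outer/Inner types and the Guard.AllPresent wrapper are made up to stand in for a, a.x and a.y):

using NUnit.Framework;

public class Inner { }
public class Outer { public Inner x; public Inner y; }

public static class Guard
{
    // The condition under test: true only when a, a.x and a.y are all non-null.
    public static bool AllPresent(Outer a) => a != null && a.x != null && a.y != null;
}

[TestFixture]
public class GuardTests
{
    private static Outer Make(bool hasX, bool hasY) =>
        new Outer { x = hasX ? new Inner() : null, y = hasY ? new Inner() : null };

    [Test] public void ANull()    => Assert.IsFalse(Guard.AllPresent(null));               // F any any
    [Test] public void XNull()    => Assert.IsFalse(Guard.AllPresent(Make(false, true)));  // T F any
    [Test] public void YNull()    => Assert.IsFalse(Guard.AllPresent(Make(true, false)));  // T T F
    [Test] public void AllThere() => Assert.IsTrue(Guard.AllPresent(Make(true, true)));    // T T T
}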

Is there a sensible scenario where the == operator is not the negation of the != operator?

As a follow-up to my previous question (Does overriding the operator == for a class automatically override the operator !=):
is there a scenario where you would override the == and != operators to not be the negation of each other,
where the negation would be defined as:
(!(a == b)) == (a != b)
or
(a == b) == (!(a != b))
In a broad sense, no. Stepanov would say that if you don't follow this rule of logic, you get what you deserve. That is why in C++20 these operators are, by default, defined in terms of each other, just as he wanted.
In practice, yes, there are at least two scenarios.
NaN (not-a-number) in IEEE floating-point numbers. Two NaNs are different from each other, and a NaN is not even equal to itself.
As noted in the comments, the logical inconsistency is not exactly between == and !=, but it is related: you can do a = b; assert(a != b); or have assert(a != a) hold.
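A quick C# illustration of the NaN case (IEEE semantics are the same as in C++):

using System;

class NaNDemo
{
    static void Main()
    {
        double a = double.NaN;
        double b = a;               // a = b, and yet...
        Console.WriteLine(a == b);  // False: == fails even after assignment
        Console.WriteLine(a != b);  // True: != is still !(a == b) here;
                                    // what breaks is reflexivity, not negation
        Console.WriteLine(a == a);  // False: assert(a != a) would hold
    }
}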
Expression constructs, aka expression templates, aka DSLs. Here == and != create two independent expression objects. Logically they should still be opposites, but not in the sense of simple Boolean values. Take a look at Boost.Phoenix or Boost.Spirit. These libraries wouldn't be possible if the language forced these operators to return bool, with one the negation of the other.
In both cases you can reinterpret the situation to "restore" the logical rule. For 1), you can say that once there is a NaN in your system, the program is no longer in a logical state. For 2), you can say that the "expression" a != b should also be generated by !(a == b), even if neither is immediately a bool.
So, even if the language lets you, you shouldn’t play with the rules of logic.
There is also the myth that checking for inequality is sometimes faster than checking for equality, or vice versa, used as an excuse to implement them separately, which can lead to logical inconsistency.
This is nonsense given the short-circuiting of logical operations that is fundamental to C++, and that should be fundamental to any system built on top of C++.
This happens all the time when working with databases, where a NULL value fails every comparison.
So, if you're implementing objects that model data that comes from a database, and you have an object that represents a NULL value, comparing something for equality with the NULL value will be false. Comparing it for inequality with a NULL value will also be false. Every kind of comparison will be false. And, the icing on the cake: comparing a NULL value for equality with another NULL value is also false.
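As a sketch, here is what such an object might look like in C# (this SqlInt is a made-up type, not the real System.Data.SqlTypes one, and it returns plain bool for simplicity). The point is that == and != are deliberately not negations of each other:

struct SqlInt
{
    public readonly bool IsNull;
    public readonly int Value;

    public SqlInt(int value) { IsNull = false; Value = value; }
    private SqlInt(bool isNull) { IsNull = isNull; Value = 0; }
    public static readonly SqlInt Null = new SqlInt(true);

    // NULL compared with anything (even NULL) is false...
    public static bool operator ==(SqlInt a, SqlInt b) =>
        !a.IsNull && !b.IsNull && a.Value == b.Value;

    // ...and so is the inequality: != is not !(==) here.
    public static bool operator !=(SqlInt a, SqlInt b) =>
        !a.IsNull && !b.IsNull && a.Value != b.Value;

    public override bool Equals(object obj) => obj is SqlInt s && this == s;
    public override int GetHashCode() => IsNull ? 0 : Value.GetHashCode();
}

With this type, SqlInt.Null == SqlInt.Null and SqlInt.Null != SqlInt.Null are both false, matching the database semantics described above.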
Any scenario where people decided to use a === (triple equals) operator to check for equality indicates that, if you assume
(!(a === b)) == (a != b)
then your == is not the negation of !=.
Note that I am not implying that there is a === operator in C++! I am illustrating a scenario, as the question asked.

clang-tidy check to require parentheses around compound expressions

I would like to catch cases like this:
if(a == 2 && b == 3)
and convert them to:
if((a == 2) && (b == 3))
I didn't see anything that sounded like this here - is there a way to enable this?
There is no clang-tidy check that does this transformation, probably because there is nothing wrong with the code you want to transform.
I don't think this transformation is even what clang-tidy is intended for, since it is purely a question of coding style. I have found no guideline that prefers the first style over the second, or vice versa.
You could write your own check, but I don't think it's worth it. The only thing to be gained here is readability, and even that is debatable at best.

Is there any reason for using if(1 || !Foo())?

I read some legacy code:
if ( 1 || !Foo() )
Is there any seen reason why not to write:
if ( !Foo() )
The two are not the same. The first will never evaluate Foo() because the 1 short-circuits the ||.
Why it's done: probably someone wanted to force entry into the then branch for debugging purposes and left it there. It could also be that this was written before source control, so they didn't want the code to be lost, just bypassed for now.
if (1 || !Foo()) will always be satisfied. !Foo() will not even be reached, because of short-circuit evaluation.
This happens when you want to make sure the code inside the if is executed, but you don't want to remove the real condition from it, probably for debugging purposes.
Additional information that might help you:
if(a && b) - if a is false, b won't be checked.
if(a && b) - if a is true, b will be checked, because if it's false, the expression will be false.
if(a || b) - if a is true, b won't be checked, because this is true anyway.
if(a || b) - if a is false, b will be checked, because if b is true then it'll be true.
It's highly recommended to use a macro for this purpose, say DEBUG_ON 1, which makes it easier to understand what the programmer means and avoids magic numbers in the code (thanks @grigeshchauhan).
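In C#, the closest equivalent is a conditional-compilation symbol rather than a macro. A hedged sketch of the same idea (FORCE_DEBUG_PATH is a made-up name):

#define FORCE_DEBUG_PATH   // delete this line to restore the real check

using System;

class Demo
{
    static bool Foo() => true;

    static void Main()
    {
#if FORCE_DEBUG_PATH
        bool forced = true;    // forces entry into the branch while debugging
#else
        bool forced = false;   // equivalent to the plain if (!Foo())
#endif
        if (forced || !Foo())
        {
            Console.WriteLine("branch under test");
        }
    }
}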
1 || condition
is always true, regardless of whether the condition is true or not. In this case, the condition is never even evaluated. The following code:
int c = 5;
if (1 || c++){}
printf("%d", c);
outputs 5, since c is never incremented; however, if you changed 1 to 0, the c++ would actually be evaluated, making the output 6.
A usual practical use of this is when you want to test a piece of code that is invoked only when a rarely met condition is true:
if (1 || condition ) {
// code I want to test
}
This way, condition will never be evaluated, and therefore // code I want to test is always invoked. However, it is definitely not the same as:
if (condition) { ...
which is a statement where condition will actually be evaluated (and in your case, Foo will be called).
The question was answered properly: the difference is that the right side of the || operation is short-circuited, suggesting this is debug code to force entry into the if block.
But in the interest of best practices, or at least my rough stab at one, I'd suggest some alternatives, in order of increasing preference (best is last).
Note: I noticed after coding the examples that this is a C++ question; the examples are C#. Hopefully you can translate. If anyone needs me to, just post a comment.
In-line comment:
if (1 /*condition*/) //temporary debug
Out-of-line comment:
//if(condition)
if(true) //temporary debug
Name-Indicative Function
//in some general-use container
bool ForceConditionForDebug(bool forcedResult, string ignoredCondition)
{
#if DEBUG
    Debug.WriteLine(
        string.Format(
            "Conditional {0} forced to {1} for debug purposes",
            ignoredCondition,
            forcedResult));
    return forcedResult;
#else
#if ALLOW_DEBUG_CODE_IN_RELEASE
    return forcedResult;
#else
    throw new ApplicationException("Debug code detected in release mode");
#endif
#endif
}

//Where used
if (ForceConditionForDebug(true, "condition")) ...

//Our case
if (ForceConditionForDebug(true, "!Foo()")) ...
And if you wanted a really robust solution, you could add a repository rule to source control that rejects any checked-in code calling ForceConditionForDebug. Code like the original should never have been written, because it obviously doesn't communicate intent; it should never have been checked in, or been allowed to be checked in (source control? peer review?); and it should definitely never be allowed to execute in production in its current form.

Please suggest an approach to unit testing a simple function

Suppose I have the following function (pseudocode):
bool checkObjects(a, b)
{
    if ((a.isValid() && a.hasValue()) ||
        (b.isValid() && b.hasValue()))
    {
        return true;
    }
    return false;
}
Which tests should I write to be able to claim that it's 100% covered?
There are 16 possible input combinations in total. Should I write 16 test cases, or should I try to be smart and omit some of them?
For example, should I write test for
[a valid and has value, b valid and has value]
if I tested that it returns what expected for
[a valid and has value, b invalid and has value]
and
[a invalid and has value, b valid and has value]
?
Thanks!
P.S.: Maybe someone can suggest good reading on unit testing approaches?
Test Driven Development by Kent Beck is well-done and is becoming a classic (http://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530)
If you want to be maximally thorough, then yes, all 16 checks would be worthwhile.
It depends. Speaking personally, I'd be satisfied to test all the boundary conditions: the two true cases where making one item false would make the overall result false, and the four false cases where making one item true would make the overall result true. But that is a judgment call, and I wouldn't fault someone who tested all 16 cases.
Incidentally if you unit tested one true case and one false one, code coverage tools would say that you have 100% coverage.
If you are worried about writing 16 test cases, you can try features like NUnit's TestCase or MbUnit's RowTest. Other languages/frameworks have similar features.
This would allow you to test all 16 combinations with a single (and small) test method.
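For example, with NUnit's TestCase attribute the whole truth table can live in one method. A minimal self-contained sketch (Obj and Checker are hypothetical stand-ins for your real classes):

using NUnit.Framework;

public class Obj
{
    private readonly bool valid, hasValue;
    public Obj(bool valid, bool hasValue) { this.valid = valid; this.hasValue = hasValue; }
    public bool IsValid() => valid;
    public bool HasValue() => hasValue;
}

public static class Checker
{
    public static bool CheckObjects(Obj a, Obj b) =>
        (a.IsValid() && a.HasValue()) || (b.IsValid() && b.HasValue());
}

[TestFixture]
public class CheckObjectsTests
{
    // One row per combination; four of the sixteen are shown here.
    [TestCase(true,  true,  false, false, ExpectedResult = true)]   // a valid and has value
    [TestCase(false, false, true,  true,  ExpectedResult = true)]   // b valid and has value
    [TestCase(true,  false, false, true,  ExpectedResult = false)]  // neither side complete
    [TestCase(false, false, false, false, ExpectedResult = false)]  // everything false
    public bool CheckObjects_Returns(bool aValid, bool aHas, bool bValid, bool bHas) =>
        Checker.CheckObjects(new Obj(aValid, aHas), new Obj(bValid, bHas));
}

Adding the remaining rows up to all 16 combinations is then just more attributes, not more test methods.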
If testing seems hard, think about refactoring. I can see several approaches here. First, merge isValid() and hasValue() into one method and test that separately. And why does checkObjects(a, b) test two unrelated objects? Why can't you have checkObject(a) and checkObject(b), cutting the exponential growth of possibilities further? Just a hint.
If you really want to test all 16 possibilities, consider more table-ish tools, like Fitnesse (see http://fitnesse.org/FitNesse.UserGuide.FitTableStyles). Also check the parameterized JUnit runner and TestNG.