SonarQube partially covered tests - unit-testing

The SonarQube test coverage report says that my C++ statements are only partially covered. An example of a very simplified function containing such a statement is below:
std::string test(int num) {
    return "abc";
}
My test is as follows:
TEST(TestFunc, Equal) {
    std::string res = test(0);
    EXPECT_EQ(res, "abc");
}
SonarQube's coverage report says that the return statement is only partially covered by tests (1 of 2 conditions). I am wondering what the other condition is that I need to test for.
I also saw the following in the report:
Conditions to cover: 2
Uncovered conditions: 1
Condition Coverage: 50%
It seems like I need a test to cover the other condition, but I can't figure out what that is.

After more research, it turns out this is not a SonarQube problem. The post below (and the workaround it describes) most likely explains the root cause of my problem.
Related post: LCOV/GCOV branch coverage with C++ producing branches all over the place
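For anyone hitting the same thing: the hidden second "condition" is almost certainly the exception-handling branch that g++ emits for the std::string constructor when the file is built with coverage instrumentation (for example g++ --coverage feeding gcov/lcov and then SonarQube), not anything in the function's own logic. A minimal sketch of the contrast, with hypothetical function names:

#include <string>

int test_int(int num) {
    return 42;       // nothing here can throw, so no hidden branch is recorded
}

std::string test_str(int num) {
    return "abc";    // constructing the returned std::string can throw, so the
                     // compiler adds an exception-path branch that a normal
                     // test will never take, hence "1 of 2 conditions"
}

If you generate the report with gcovr, its --exclude-throw-branches option is one common way to filter these exception-only branches out of the branch/condition numbers.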

Related

tcltest unit tests: how to check if constraint is active to enable code reuse

We are using tcltest to do our unit testing but we are finding it difficult to reuse code within our test suite.
We have a test that is executed multiple times for different system configurations. I created a proc that contains this test so I can reuse it everywhere instead of duplicating the test's code many times throughout the suite.
For example:
proc test_config {config_name} {
    test $test_name {} -constraints $config_name -body {
        <test body>
    } -returnCodes ok
}
The problem is that I sometimes want to test only certain configurations. I pass the configuration name as a parameter to the proc as shown above, but the -constraints {} part of the test does not look up the $config_name parameter as expected. The test is always skipped unless I hard-code the configuration name, but then using a proc is no longer possible and I would need to duplicate the code everywhere just to hard-code the constraint.
Is there a way to check whether the constraint is enabled in the tcltest configuration?
Something like this:
proc test_config {config_name} {
    testConstraint X [expr { ::tcltest::isConstraintActive $config_name }]
    test $test_name {} -constraints X -body {
        <test body>
    } -returnCodes ok
}
So, is there a function in tcltest that does something like ::tcltest::isConstraintActive $config_name?
Is there a way to check whether the constraint is enabled in the tcltest configuration?
Yes. The testConstraint command will do that if you don't pass in an argument to set the constraint's status:
if {[tcltest::testConstraint foo]} {
    # ...
}
But don't use this to decide whether to run tests or to do per-test setup or cleanup. Tests should only ever be turned on or off by constraints directly, so that the report generated by tcltest can properly track which tests were disabled and for what reasons; each test also has -setup and -cleanup options that allow scripts to be run before and after the test if the constraints are matched.
Personally, I don't recommend putting tests inside procedures or using a variable for a test name. It works and everything, but it's confusing when you're trying to figure out what test failed and why; debugging is hard enough without adding to it. (I also find that apply is great as a way to get a procedure-like thing inside a test without losing the “have the code inspectable right there” property.)

How to test repository pattern in a good way with Laravel

So I'm quite inexperienced with testing, but I'm working on it. One of the things I read was that a test should not really care how the method does what it does, but should check the expected outcome.
With this in mind, I'm not sure if I'm testing my repositories in a useful manner. As I read in this SO answer, the way I'm doing it is actually almost writing the code twice.
Consider the following code:
public function getUserCart($userId)
{
    return $this->shoppingcarts->whereUserId($userId)->first();
}
With the following test:
public function testGetUserCart()
{
    $shoppingcartMock = $this->mock('Shoppingcart');
    $shoppingcartMock->shouldReceive('whereUserId')->once()->with('some id')->andReturn($shoppingcartMock);
    $shoppingcartMock->shouldReceive('first')->once()->andReturn('cart');
    $repo = App::make('EloquentShoppingcartRepository');
    $this->assertEquals('cart', $repo->getUserCart('some id'));
}
My test passes and I have code coverage, but if I were to change $this->shoppingcarts->whereUserId($userId)->first() to $this->shoppingcarts->where('user_id', $userId)->first(), the test of course fails.
The code behaves just the same, and in my opinion a good test should not care which exact method I'm using as long as the outcome is as expected.
My question is twofold. Is it useful to test repositories? And if so, what approach should I take?

How to mark a Google Test test-case as "expected to fail"?

I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it fails as long as the test case is in the "expected to fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in Google Test, but it does exist in the Boost Unit Test Framework and in LIT.
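For reference, here is a rough sketch of the Boost.Test feature mentioned above, assuming the decorator syntax available in Boost 1.59 and later (the module and test names are placeholders):

#define BOOST_TEST_MODULE expected_failure_example
#include <boost/test/included/unit_test.hpp>

// The expected_failures decorator tells Boost.Test that up to one assertion
// failure inside this test case is anticipated and should not fail the run.
BOOST_AUTO_TEST_CASE(not_yet_implemented,
                     *boost::unit_test::expected_failures(1))
{
    BOOST_CHECK(false);  // counted as an expected failure
}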
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
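For completeness, a minimal sketch of that approach (the fixture and test names are placeholders): the test is still compiled, but it is skipped and reported as disabled rather than failed, and it can be run on demand with --gtest_also_run_disabled_tests.

// The DISABLED_ prefix makes Google Test skip the test and count it as
// "disabled" instead of as a failure.
TEST_F(testclass, DISABLED_testname)
{
    // exercise the not-yet-implemented feature here
    FAIL() << "feature not implemented yet";
}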
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing, as you suggest: all tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.

Unit test fails with NUnit Test Adapter but not with ReSharper in VS2012

I have a strange problem when I run my unit tests in VS2012. I'm using NUnit and run the tests with ReSharper, and there all tests pass. But some of my colleagues don't have ReSharper, so they run the tests with the Test Explorer and the NUnit Test Adapter (Beta 3) v0.95.2 extension (http://visualstudiogallery.msdn.microsoft.com/6ab922d0-21c0-4f06-ab5f-4ecd1fe7175d). However, with that extension some tests fail.
The specific code that fails is the following:
public void Clear()
{
    this.Items.ForEach(s => removeItem(s));
}

private bool removeItem(SequenceFlow item)
{
    int i = this.Items.IndexOf(item);
    if (i == -1)
        return false;
    this.Items.RemoveAt(i);
    return true;
}
The exception is:
System.InvalidOperationException : Collection was modified; enumeration operation may not execute.
Result StackTrace:
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Collections.Generic.List`1.ForEach(Action`1 action)
Now, I'm not looking for an answer to why I get this exception; sure, I can understand why it fails. What I can't understand is why the tests fail with the Test Explorer but not when using ReSharper. Why do I get different behavior for the tests?
I used ildasm.exe to see if the code is compiled differently when testing for the two cases, but the IL-code is identical.
The tests also run during commit on our TeamCity server with no errors.
Furthermore, when debugging the test through the NUnit Test Adapter I get the same exception, but when debugging and stepping through the code with ReSharper, there is no exception at all.
I found that in VS2012 similar code would fail at run-time with the same error. If you used this method in an application, would it succeed?
You're functionally iterating over a collection and removing items from it while you're still in the collection - this changes the internal indexing of the collection, invalidating the addressing of the iteration. If you'd coded it as:
for(int I=0; I < Items.Count, I++)
{
removeItem(Items[I]);
}
you'd wind up skipping items (or hitting an index-out-of-range error if the count were cached up front) because the collection's internal indexing resets as items are removed.
I can't speak to ReSharper, but I'd guess that it has a more generous run-time engine than the MS nunit engine (or, for that matter, the MS runtime engine).
I was doing something similar in an application where I tried to iterate through the collection of dependent objects on my parent and remove them. It failed with the exact error you're receiving; ultimately I went with a LINQ query to remove all items attached to the specified parent - the equivalent of running the SQL query DELETE FROM table WHERE parentID = parentid.

How to unit test negation when there are multiple conditions?

I have this piece of code:
function aFunctionForUnitTesting(aParameter)
    return (aFirstCheck(aParameter) &&
            aOtherOne(aParameter) &&
            aLastOne(aParameter));
How can I unit test this?
My problem is the following: let's say I create this unit test:
FailWhenParameterDoesntFillFirstCheck()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameter));
}
How do I know that this test passes because of the first check (it might have failed because of the second or the third, so my function aFirstCheck might be buggy)?
You need to pass a different value for your parameter, one that passes the first check and fails on the second.
FailWhenParameterDoesntFillFirstCheck()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameterThatPassesFirstCheckButFailsTheSecond));
}
You don't have to write all these cases; it depends on what level of code coverage you want. You can get 'modified condition/decision coverage' with four tests:
- one where all parameters pass their checks
- one for each of the three checks where just that check fails
'Multiple condition coverage' would require 8 tests here (every combination of the three conditions). Not sure I'd bother unless I was working for Airbus :-)
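For illustration, here is a rough sketch of those four cases written with Google Test (which appears earlier in this thread). The Parameter struct and the three checks are hypothetical stand-ins for the code in the question so that the sketch is self-contained:

#include <gtest/gtest.h>

// Hypothetical stand-ins for the question's code.
struct Parameter { bool first, second, last; };

struct myObject {
    bool aFirstCheck(const Parameter& p) { return p.first; }
    bool aOtherOne(const Parameter& p)   { return p.second; }
    bool aLastOne(const Parameter& p)    { return p.last; }
    bool aFunctionForUnitTesting(const Parameter& p) {
        return aFirstCheck(p) && aOtherOne(p) && aLastOne(p);
    }
};

// The four MC/DC-style cases: all checks pass, then each check failing alone.
TEST(AFunctionForUnitTesting, AllChecksPass) {
    EXPECT_TRUE(myObject().aFunctionForUnitTesting({true, true, true}));
}
TEST(AFunctionForUnitTesting, FirstCheckFails) {
    EXPECT_FALSE(myObject().aFunctionForUnitTesting({false, true, true}));
}
TEST(AFunctionForUnitTesting, SecondCheckFails) {
    EXPECT_FALSE(myObject().aFunctionForUnitTesting({true, false, true}));
}
TEST(AFunctionForUnitTesting, LastCheckFails) {
    EXPECT_FALSE(myObject().aFunctionForUnitTesting({true, true, false}));
}

Each failing case flips exactly one check, so if one of them passes unexpectedly you know which individual check is not doing its job.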