Spock returning incorrect/default value for a getEstimatedNumIterations() call

On the IterationInfo class there is a method called getEstimatedNumIterations() which, according to its Javadoc, returns the total number of iterations for the ongoing execution of the owning feature. In older versions of Spock (2.0-groovy-3), this method worked as expected:
(simple, slightly absurd PoC):
def "Iteration test - #x"() {
given: "a dumb way to illustrate the problem"
println "Estimate: " + this.getSpecificationContext().getCurrentIteration().getEstimatedNumberOfIterations()
expect:
true
where:
x << [1, 2, 3]
}
This prints Est: 3 three times.
However, with the latest stable release (2.3-groovy-4) the same spec prints:
Est: -1 (three times)
I can see there were quite a few framework improvements from 2.0 to 2.3, especially involving iteration construction. Is there any other way to get the estimated number of iterations for a feature? I am primarily using this in a few custom annotations.

This is a regression in the Spock Framework; I've created an issue.
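Until it's fixed, a workaround sketch for specs you control (the names here are illustrative): keep the data provider in a @Shared field, so the count is available without asking the framework. As far as I know, there's no direct replacement for extension/annotation code yet.

import spock.lang.Shared
import spock.lang.Specification

class IterationCountSpec extends Specification {

    @Shared
    List<Integer> xs = [1, 2, 3]   // the data provider lives in a field

    def "Iteration test - #x"() {
        expect:
        xs.size() == 3             // iteration count read straight off the provider

        where:
        x << xs
    }
}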

Sonarqube Partially covered tests

The Sonarqube test coverage report says that my C++ statements are only partially covered. A very simplified example of a function containing such a statement is below:
std::string test(int num) {
    return "abc";
}
My test is as follows:
TEST(TestFunc, Equal) {
    std::string res = test(0);
    EXPECT_EQ(res, "abc");
}
The Sonarqube coverage report says that the return statement is only partially covered by tests (1 of 2 conditions). I am wondering what the other condition is that I need to test for.
I also saw the following in the report:
Condition to cover: 2
Uncovered Condition: 1
Condition Coverage: 50%
It seems like I need a test to cover the other condition, but I can't figure out what that is.
After more research, this is not a Sonarqube problem. The post linked below (and the workaround in it) most likely explains the root cause: with GCC, a statement that can throw gets a hidden exception branch, so gcov reports two conditions for it.
Related post: LCOV/GCOV branch coverage with C++ producing branches all over the place
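In short, the second "condition" is the exception branch that GCC instruments for the implicit std::string construction; it isn't reachable from a test. A sketch of how gcov sees the statement (assuming GCC with exceptions enabled):

#include <string>

std::string test(int num) {
    // gcov counts two outcomes for this single statement:
    //   1. std::string(const char*) returns normally        -> covered by the test
    //   2. the constructor throws (e.g. std::bad_alloc) and
    //      unwinds out of test()                             -> practically uncoverable
    return "abc";
}

Report tools can usually filter such branches out; gcovr, for instance, has an --exclude-throw-branches flag in reasonably recent versions.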

How to unit test Observable.interval with RxJS 5?

I have a very simple periodic activity which is scheduled by RxJS 5, and I'm a bit curious about how to unit test this type of code:
start() {
    this._subscription = Observable
        .interval(this._interval, SchedulerProvider.getScheduler())
        .subscribe(_ => this.doSomething());
}

stop() {
    this._subscription.unsubscribe();
}
What I tried is to stub my SchedulerProvider.getScheduler function to return a new TestScheduler() during my tests (by default it just returns undefined, so the production code uses the default scheduler), and then to set up virtual "ticks" for the interval:
myTestSchedulerInstance.createColdObservable('xxxxxxxxxx', {x: 1}); // tick ten times
unitUnderTest.start();
myTestSchedulerInstance.flush();
// ... assert that doSomething() was called 10 times ...
But it doesn't work. I guess createColdObservable and createHotObservable just return a new observable based on the marble-syntax input string, but this doesn't affect my interval.
I went through the docs and I'm now a bit confused about marble testing in RxJS 5, because the examples in the docs (and everywhere else) look like this:
first, create a test scheduler
then create a cold/hot observable using this scheduler
your unit under test applies some operators to the created observable, which produces another observable
then you assert on the resulting observable using expectObservable
But my use case is different: I don't want to return the Observable.interval, because I'm only interested in the side effect of my subscription. Is it possible to write such a test using RxJS 5?
The only alternatives I see right now:
use sinon's fake timers
inside start(), map the side effect onto the interval and, after subscribing to it, return the observable containing the side effect, then use it in the assertion with expectObservable
But each solution seems a bit messy.
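One direction that might work, since TestScheduler extends VirtualTimeScheduler, is to skip marble strings entirely and advance virtual time by hand. A sketch (untested; assumes this._interval is 1000 and the getScheduler stub described above):

const scheduler = new TestScheduler(() => true); // assertion fn unused: no expectObservable here
SchedulerProvider.getScheduler = () => scheduler; // the stub described above

unitUnderTest.start();            // subscribes interval(1000, scheduler)
scheduler.maxFrames = 10 * 1000;  // let ten ticks' worth of virtual time elapse
scheduler.flush();                // synchronously runs all work due by maxFrames

// assert that doSomething() was called 10 times
unitUnderTest.stop();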

FakeItEasy failed assertion with randomly <ignored> param (and there is a match in the calls)

I'm trying to implement a unit test with a method-call assertion (MustHaveHappened).
I'm using the following code:
[Fact]
public void Should_set_setting_and_map_new_value_for_non_existing_setting()
{
    // Arrange
    var userSetting = new SettingDetailsBuilder().Build();
    var repository = A.Fake<ISettingsRepository>();
    A.CallTo(() => repository.GetUserSetting(0, 0, null)).WithAnyArguments().Returns(userSetting);
    var dataRetriever = new SettingsDataRetriever(repository);

    // Act
    var newUserSetting = dataRetriever.SetUserSetting("variableName", "SomeOtherValue", 1, 1, "FST");

    // Assert
    A.CallTo(() => repository.GetUserSetting(1, 1, "variableName")).MustHaveHappened();
}
But the test randomly fails, with some arguments shown as <Ignored>, even though the assertion uses exact parameters.
Error:
Assertion failed for the following call:
    AlfaProNext.Repositories.Settings.ISettingsRepository.GetUserSetting(1, <Ignored>, <Ignored>)
Expected to find it at least once but found it #0 times among the calls:
    1: AlfaProNext.Repositories.Settings.ISettingsRepository.Exists(varName: "variableName")
    2: AlfaProNext.Repositories.Settings.ISettingsRepository.GetUserSetting(
           userId: 1,
           profileId: 1,
           variableName: "variableName")
Does anybody know why this happens randomly?
This is likely due to tests being executed in parallel by xUnit 2.0 while using a FakeItEasy version older than 2.0.0-beta009. That version includes a fix for issue 476, which made argument constraints using That and Ignored thread-safe.
If feasible, consider upgrading to the latest FakeItEasy (you can see the latest changes at the GitHub project), or turn off parallel test execution in xUnit, as shown below.
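Disabling parallelization is a one-liner; a sketch assuming xUnit 2 (place it in any file of the test assembly):

using Xunit;

// Tells xUnit 2 to run all test collections sequentially.
[assembly: CollectionBehavior(DisableTestParallelization = true)]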
After some great feedback from the FakeItEasy forum, I have my answer. Apparently the current stable version is not thread-safe and cannot handle the latest version of xUnit 2, which runs the tests in parallel.
https://github.com/FakeItEasy/FakeItEasy/issues/562
My fix was to upgrade FakeItEasy to version 2 beta 10 (the alternative is to run the tests on a single thread).

What unit testing frameworks are available for F#?

I am looking specifically for frameworks that allow me to take advantage of unique features of the language. I am aware of FsUnit. Would you recommend something else, and why?
My own unit testing library, Unquote, takes advantage of F# quotations to allow you to write test assertions as plain, statically checked F# boolean expressions and automatically produces nice step-by-step test failure messages. For example, the following failing xUnit test
[<Fact>]
let ``demo Unquote xUnit support`` () =
    test <# ([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0] #>
produces the following failure message
Test 'Module.demo Unquote xUnit support' failed:
([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0]
[4; 3; 2; 1] = [4..1]
[4; 3; 2; 1] = []
false
C:\File.fs(28,0): at Module.demo Unquote xUnit support()
FsUnit and Unquote have similar missions: to allow you to write tests in an idiomatic way, and to produce informative failure messages. However, FsUnit is really just a small wrapper around NUnit constraints, creating a DSL which hides object construction behind composable function calls. That comes at a cost: you lose static type checking in your assertions. For example, the following is valid in FsUnit:
[<Test>]
let test1 () =
    1 |> should not (equal "2")
But with Unquote, you get all of F#'s static type-checking features, so the equivalent assertion would not even compile, preventing us from introducing a bug into our test code:
[<Test>] //yes, Unquote supports both xUnit and NUnit automatically
let test2 () =
    test <# 1 <> "2" #> //simple assertions may be written more concisely, e.g. 1 <>! "2"
    //          ^^^
    //Error 22: This expression was expected to have type int but here has type string
Also, since quotations are able to capture more information at compile time about an assertion expression, failure messages are a lot richer too. For example the failing FsUnit assertion 1 |> should not (equal 1) produces the message
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
Expected: not 1
But was: 1
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\FsUnit.fs(11,0): at FsUnit.should[a,a](FSharpFunc`2 f, a x, Object y)
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
Whereas the failing Unquote assertion 1 <>! 1 produces the following failure message (notice the cleaner stack trace too)
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
1 <> 1
false
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
And of course from my first example at the beginning of this answer, you can see just how rich and complex Unquote expressions and failure messages can get.
Another major benefit of using plain F# expressions as test assertions over the FsUnit DSL, is that it fits very well with the F# process of developing unit tests. I think a lot of F# developers start by developing and testing code with the assistance of FSI. Hence, it is very easy to go from ad-hoc FSI tests to formal tests. In fact, in addition to special support for xUnit and NUnit (though any exception-based unit testing framework is supported as well), all Unquote operators work within FSI sessions too.
I haven't yet tried Unquote, but I feel I have to mention FsCheck:
http://fscheck.codeplex.com/
This is a port of Haskell's QuickCheck library, where rather than specifying which specific tests to carry out, you specify which properties of your function should hold true.
To me, this is a bit harder than writing traditional tests, but once you figure out the properties, you'll have more solid tests. Do read the introduction: http://fscheck.codeplex.com/wikipage?title=QuickStart&referringTitle=Home
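To give a taste of the property style, here's a minimal sketch (the classic example; FsCheck generates the input lists itself):

open FsCheck

// Property: reversing a list twice yields the original list.
let revRevIsOriginal (xs: int list) = List.rev (List.rev xs) = xs

Check.Quick revRevIsOriginal
// prints: Ok, passed 100 tests.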
I'd guess a mix of FsCheck and Unquote would be ideal.
You could try my unit testing library Expecto; it has some features you might like:
F# syntax throughout, tests as values; write plain F# to generate tests
Use the built-in Expect module, or an external lib like Unquote, for assertions
Parallel tests by default
Test your Hopac code or your Async code; Expecto is async throughout
Pluggable logging and metrics via the Logary Facade; easily write adapters for build systems, or use the timing mechanism to build an InfluxDB+Grafana dashboard of your tests' execution times
Built-in support for BenchmarkDotNet
Built-in support for FsCheck, which makes it easy to build tests with generated/random data or to build invariant models of your object's/actor's state space
Hello world looks like this:
open Expecto

let tests =
    test "A simple test" {
        let subject = "Hello World"
        Expect.equal subject "Hello World" "The strings should equal"
    }

[<EntryPoint>]
let main args =
    runTestsWithArgs defaultConfig args tests

Implementing xUnit in a new programming language

Some of us still "live" in a programming environment where unit testing has not yet been embraced. To get started, the obvious first step would be to try to implement a decent framework for unit testing, and I guess xUnit is the "standard".
So what is a good starting point for implementing xUnit in a new programming language?
BTW, since people are asking: My target environment is Visual Dataflex.
Which language is it for? There are quite a few in place already.
If this is stopping you from getting started with writing unit tests, you could start out without a testing framework.
Example in a C-style language:
void Main()
{
    var algorithmToTest = MyUniversalQuestionSolver();
    var question = Answer to { Life, Universe && Everything };
    var actual = algorithmToTest(question);
    var expected = 42;

    if (actual != expected) Error();
    // ... add a bunch of tests
}
Example in a COBOL-style language:
MAIN.
    COMPUTE EXPECTED_ANSWER = 42
    SOLVE ANSWER_TO_EVERYTHING GIVING ACTUAL_ANSWER
    SUBTRACT ACTUAL_ANSWER FROM EXPECTED_ANSWER GIVING DIFFERENCE
    IF DIFFERENCE NOT.EQ 0 THEN
        DISPLAY "ERROR!"
    END-IF
    * ... add a bunch of tests
    STOP RUN
Run Main after you have finished a change (and possibly a compile) of your code. Run Main on the server whenever someone submits code to your repository.
When you get hooked, look for a framework, or see if you can factor some of the bits out of Main into your own framework; a first sketch of that follows below.
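For instance, the first bits worth factoring out are usually a failure counter and a check helper, roughly like this (continuing the C-style sketch above; all names are illustrative):

int failures = 0;

// Record the failure and keep going, so one run reports every broken test.
void Check(bool condition, string testName)
{
    if (!condition)
    {
        failures = failures + 1;
        Print("FAIL: " + testName);
    }
}

void Main()
{
    var algorithmToTest = MyUniversalQuestionSolver();
    Check(algorithmToTest(question) == 42, "solver finds the Answer");
    // ... add a bunch of tests
    Print(failures + " failure(s)");
}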
I'd suggest that a good starting point would be to use xunit on a couple of other languages to get a feel for how this style of unit test framework works. Then you'll need to go in depth into the behaviour and start working out how to recreate that behaviour in a way that fits with your new language.
I created a decent unit test framework in VFP by basing it on the code in Test Driven Development: A Practical Guide, by David Astels. You'll get a long way by reading through the examples, understanding the techniques and translating the Java code into your language.
I found Pragmatic Unit Testing in C# with NUnit very helpful!