How do you create unit tests in F#? I typically use the unit testing portion of Visual Studio with [TestClass] and [TestMethod] attributes and use the Test View to run them. I know I can just create a script file and run the tests from there, but I like the way it is currently handled.
Check out FsCheck. It's a port of Haskell's QuickCheck. FsCheck allows you to specify properties a function must satisfy, which it will then verify against a "large number of randomly generated cases".
It's something you can't easily do with an imperative language like C#.
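As a minimal sketch of what a property looks like (assuming FsCheck is referenced; the property itself is just an illustration):

open FsCheck

// Property: reversing a list twice yields the original list.
// Check.Quick verifies it against 100 randomly generated lists by default.
let revRevIsOriginal (xs: int list) =
    List.rev (List.rev xs) = xs

Check.Quick revRevIsOriginal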
I'd rather use FsUnit or FsTest to write tests in F#; it feels more natural than OO xUnit-style tests.
EDIT 2014: I now consider FsUnit/FsTest to be mostly useless syntactic sugar, and "more natural than OO" doesn't mean anything. A few months ago I wrote up my current thoughts on testing here (I recommend reading the entire thread).
In VS2013 you can use the following:

open Microsoft.VisualStudio.TestTools.UnitTesting

[<TestClass>]
type testrun() =

    [<TestInitialize>]
    member x.setup() =
        // your setup code
        ()

    [<TestMethod>]
    member x.yourTestName() =
        // your test code
        ()
Hint: If you are looking for UI unit testing then you can use this setup with Canopy.
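As a rough sketch of what such a UI test looks like (based on canopy's classic API; the URL and selector are purely illustrative):

open canopy
open runner

// start a browser instance
start chrome

// register a UI test
"a first canopy test" &&& fun _ ->
    // navigate to a page (illustrative URL)
    url "http://localhost:8080/"
    // assert that the element with id 'welcome' has the text 'Welcome'
    "#welcome" == "Welcome"

// run all registered tests, then close the browser
run()
quit()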
I use a combination of xUnit.net, TestDriven.Net (Visual Studio Add-in for running tests, free for "students, open source developers and trial users"), and my own open source Unquote library (which also works with NUnit and any other exception-based assertion framework). This has worked out great and getting started is really easy:
Download and install TestDriven.Net
Download xUnit.net, unzip to any location and run xunit.installer.exe to integrate with TestDriven.Net
Download Unquote, unzip to any location
Create a project within your solution for unit tests
Add references to xunit.dll and Unquote.dll (from unzipped downloads) in your unit test project
The following is a simple example of a .fs file in the unit test project containing xUnit.net / Unquote style unit tests.
module Tests

open Swensen.Unquote
open Xunit

[<Fact>]
let ``description of first unit test`` () =
    test <# (11 + 3) / 2 = String.length ("hello world".Substring(4, 5)) #>

[<Fact>]
let ``description of second unit test`` () =
    let x = List.rev [1;2;3;4]
    x =? [4;3;1;2]
Run all the unit tests in the project by right-clicking the project in the solution explorer and selecting Run Test(s). Both of the previous example tests will fail with the following printed to the Visual Studio Output window:
------ Test started: Assembly: Tests.dll ------
Test 'Tests.description of second unit test' failed:
[4; 3; 2; 1] = [4; 3; 1; 2]
false
C:\Solution\Project\Tests.fs(12,0): at Tests.description of second unit test()
Test 'Tests.description of first unit test' failed:
(11 + 3) / 2 = String.length ("hello world".Substring(4, 5))
14 / 2 = String.length "o wor"
7 = 5
false
C:\Solution\Project\Tests.fs(7,0): at Tests.description of first unit test()
0 passed, 2 failed, 0 skipped, took 1.09 seconds (xUnit.net 1.7.0 build 1540).
You might want to try NaturalSpec. It's an F# unit-testing framework on top of NUnit.
Try xUnit.net.
As of version 2.5, NUnit allows you to use static members as tests. Also, the class-level TestFixtureAttribute is only necessary for generic classes or classes with non-default constructors. NUnit also has a backward-compatible convention that a test member may start with the word "test" instead of using TestAttribute, so you can almost write idiomatic F# with NUnit 2.5 and later.
Update
You can see some test examples without the TestFixtureAttribute in the Cashel library. I have continued using TestAttribute, since few test runners appear to pick up tests correctly when it is not present; that part of the NUnit post may be incorrect or at least misleading.
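For illustration, a minimal sketch of such an idiomatic F# test (assuming NUnit 2.5 or later and an NUnit-aware runner):

module ListTests

open NUnit.Framework

// No [<TestFixture>] class is needed: let-bound functions in a module
// compile to static members, which NUnit 2.5+ accepts as tests.
[<Test>]
let ``reversing a list twice yields the original list`` () =
    Assert.AreEqual([1; 2; 3], List.rev (List.rev [1; 2; 3]))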
Related
I've inherited a codebase which has had some bad check-ins: some of the unit tests hang completely, and I can't run the entire unit test suite because it always gets stuck on specific tests. I would like to take an inventory of the tests that are now hanging.
What's the right way to set a global timeout on all of my tests so that each one is timeboxed to a specific amount of time? For example, if I set it to 1 minute and a test takes 61 seconds, that test should automatically be aborted and marked as failed, and the test runner should then move on to the next test immediately.
I'm using Visual Studio 2015 Update 1, NUnit 2.6.4, and the NUnit 2.x Test Adapter for Visual Studio.
I believe it is the Timeout attribute that you want to use here.
E.g.
[Test, Timeout(2000)]
public void PotentiallyLongRunningTest()
{
...
}
The NUnit documentation seems to indicate it can be set at an assembly level. In AssemblyInfo.cs:
First:
using NUnit.Framework;
Then:
[assembly: Timeout(1000)]
The value is in milliseconds, so the one-minute limit described in the question would be Timeout(60000).
When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
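For example, a filter combining these operators (the category, priority, and name values are purely illustrative) could look like this:

TestCategory=Critical | (Priority=1 & Name~Login)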
xUnit.net currently does not support test case filters.
Then in your build definition you can create two test runs: one that runs the critical tests and one that runs everything else, using the Filter option of the test run.
Open the Test Runs window (the button for it is easy to miss) and create two test runs. On the first run, set the test case filter to TestCategory=Critical and enable the option to fail the build when tests fail. On the second run, set the filter to TestCategory!=Critical and leave that option off.
This way Team Build will run any test with the "Critical" category in the first run, and the build will fail if one of these tests fails. If the first run succeeds, it will kick off the non-critical tests, and the build will Partially Succeed even when one of those tests fails.
Update
The same process is explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.
I have about 300 unit tests for an assembly which is part of a solution I originally started under VS2010. Numerous tests used the Moles framework provided by Microsoft, but after upgrading to VS2012 (Update 2) I wanted to change the tests to use the officially supplied Fakes framework.
I updated the corresponding tests accordingly, which usually only involved creating a ShimsContext and some minor changes to the code:
Before
[TestMethod]
[HostType( "Moles" )]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
// Arrange
...
MGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
( t1, t2, t3 ) => null;
...
try
{
// Act
...
}
catch( Exception ex )
{
// Assert
...
}
}
After
[TestMethod]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
using( ShimsContext.Create() )
{
// Arrange
...
ShimGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
( t1, t2, t3 ) => null;
try
{
// Act
...
}
catch( Exception ex )
{
// Assert
...
}
}
}
I've got different test classes in my test project, and when I run the tests I get seemingly arbitrary errors which I cannot explain, e.g.:
Run tests for one class in Release mode => 21 tests fail / 15 pass
Run tests for same class in Debug mode => 2 tests fail / 34 pass
Run tests for same class again in Release mode => 2 tests fail / 34 pass
Run all tests in the project => 21 tests fail / 15 pass (for the class mentioned above)
Same behaviour for a colleague on his system. The error messages are always TypeLoadExceptions such as
Test method ... threw exception: System.TypeLoadException: Could not load type 'System.DirectoryServices.Fakes.ShimDirectorySearcher' in the assembly 'System.DirectoryServices.4.0.0.0.Fakes,Version=4.0.0.0, Culture=neutral, PublicKeyToken=..."
In VS2012 itself the source code editor doesn't show any errors, Intellisense works as expected, mouse tooltips over e.g. ShimDirectorySearcher show where it is located etc. Furthermore, when I open the Fakes assembly that's being generated (e.g. System.DirectoryServices.4.0.0.0.Fakes.dll) with .NET Reflector, the type shown in the error message exists.
All the tests worked fine (in Debug and Release mode as well) before we switched from VS2010 to VS2012, but now we don't have a clue what's wrong here. Why does the result change in the ways described above? Why do we get TypeLoadExceptions even though the types do exist?
Unfortunately there is hardly any help available from Microsoft or on the internet.
I don't quite understand why having the old .testsettings file from VS2010 is such a problem, but deleting it and adding a .runsettings file as suggested by MSDN did the job for me.
All problems were solved:
All unit tests run (again) without problems
Arbitrary combinations of tests run (again) without problems
I can debug tests using Fakes (before, I used to get test instrumentation errors)
Hope this helps others who run into problems; there doesn't seem to be too much information about Fakes yet.
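For reference, a minimal sketch of such a .runsettings file (the contents are illustrative; add only the settings you actually need):

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- illustrative setting; most elements are optional -->
    <ResultsDirectory>.\TestResults</ResultsDirectory>
  </RunConfiguration>
</RunSettings>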
One more thing regarding code coverage: this works (without having to configure any test settings) via the menu Test => Analyze Code Coverage. For TFS build definitions, you can enable code coverage for the build under Process => Basic => Automated Tests => 1. Test Source: click into the corresponding text field, then click the "..." button that is only shown once the field has focus. Choose Visual Studio Test Runner in the Test runner combo box; you can then also choose Enable Code Coverage from the options.
I'm testing a set of classes, and my unit tests so far are along these lines:
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input and in the properties of Y that get checked. However, in the current project state, it sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I'm accepting all failures at #4.
I'd like to see, e.g., a list of unit tests that fail at #3, while ignoring for now all those that fail at #4. What's the standard approach/terminology for this? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run; nothing in between.
However, it doesn't care much about how those results are tied to passing or failing the build, or how they are reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between "hard failures", "bad, but let's go on", etc. But that's build validation or reporting based on test results, rather than testing itself.
(Currently, our build process relies on annotations such as @Category(WorkInProgress.class) on each test method to decide what's critical and what's not.)
What I could think of would be to create assert methods that check some system property to decide whether to execute the assert (the "assert.level" property name below is just an example):

public static void assertTrue(boolean assertion, int assertionLevel) {
    // Read the desired level from a system property, e.g. -Dassert.level=3;
    // the default of 0 means leveled asserts are skipped unless the property is set.
    int level = Integer.getInteger("assert.level", 0);
    if (level >= assertionLevel) {
        Assert.assertTrue(assertion);
    }
}
I am looking specifically for frameworks that allow me to take advantage of unique features of the language. I am aware of FsUnit. Would you recommend something else, and why?
My own unit testing library, Unquote, takes advantage of F# quotations to allow you to write test assertions as plain, statically checked F# boolean expressions and automatically produces nice step-by-step test failure messages. For example, the following failing xUnit test
[<Fact>]
let ``demo Unquote xUnit support`` () =
test <# ([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0] #>
produces the following failure message
Test 'Module.demo Unquote xUnit support' failed:
([3; 2; 1; 0] |> List.map ((+) 1)) = [1 + 3..1 + 0]
[4; 3; 2; 1] = [4..1]
[4; 3; 2; 1] = []
false
C:\File.fs(28,0): at Module.demo Unquote xUnit support()
FsUnit and Unquote have similar missions: to allow you to write tests in an idiomatic way, and to produce informative failure messages. But FsUnit is really just a small wrapper around NUnit Constraints, creating a DSL which hides object construction behind composable function calls, and that comes at a cost: you lose static type checking in your assertions. For example, the following is valid in FsUnit:
[<Test>]
let test1 () =
1 |> should not (equal "2")
But with Unquote, you get all of F#'s static type-checking features, so the equivalent assertion would not even compile, preventing us from introducing a bug in our test code:
[<Test>] //yes, Unquote supports both xUnit and NUnit automatically
let test2 () =
test <# 1 <> "2" #> //simple assertions may be written more concisely, e.g. 1 <>! "2"
// ^^^
//Error 22 This expression was expected to have type int but here has type string
Also, since quotations are able to capture more information at compile time about an assertion expression, failure messages are a lot richer. For example, the failing FsUnit assertion 1 |> should not (equal 1) produces the message:
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
Expected: not 1
But was: 1
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\FsUnit.fs(11,0): at FsUnit.should[a,a](FSharpFunc`2 f, a x, Object y)
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
Whereas the failing Unquote assertion 1 <>! 1 produces the following failure message (notice the cleaner stack trace too)
Test 'Test.Swensen.Unquote.VerifyNunitSupport.test1' failed:
1 <> 1
false
C:\Users\Stephen\Documents\Visual Studio 2010\Projects\Unquote\VerifyNunitSupport\VerifyNunitSupport.fs(29,0): at Test.Swensen.Unquote.VerifyNunitSupport.test1()
And of course from my first example at the beginning of this answer, you can see just how rich and complex Unquote expressions and failure messages can get.
Another major benefit of using plain F# expressions as test assertions over the FsUnit DSL is that it fits very well with the F# process of developing unit tests. I think a lot of F# developers start by developing and testing code with the assistance of FSI; hence, it is very easy to go from ad-hoc FSI tests to formal tests. In fact, in addition to special support for xUnit and NUnit (any exception-based unit testing framework is supported as well), all Unquote operators work within FSI sessions too.
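For instance, an ad-hoc check in an FSI session might look like this (a sketch; the exact FSI output may differ):

> open Swensen.Unquote;;
> test <# List.rev [1;2;3] = [3;2;1] #>;;
val it : unit = ()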
I haven't yet tried Unquote, but I feel I have to mention FsCheck:
http://fscheck.codeplex.com/
This is a port of Haskell's QuickCheck library: rather than specifying which specific tests to carry out, you specify which properties of your function should hold true.
To me, this is a bit harder than writing traditional tests, but once you figure out the properties, you'll have more solid tests. Do read the introduction: http://fscheck.codeplex.com/wikipage?title=QuickStart&referringTitle=Home
I'd guess a mix of FsCheck and Unquote would be ideal.
You could try my unit testing library Expecto; it has some features you might like:
F# syntax throughout, tests as values; write plain F# to generate tests
Use the built-in Expect module, or an external lib like Unquote for assertions
Parallel tests by default
Test your Hopac code or your Async code; Expecto is async throughout
Pluggable logging and metrics via Logary Facade; easily write adapters for build systems, or use the timing mechanism for building an InfluxDB+Grafana dashboard of your tests' execution times
Built-in support for BenchmarkDotNet
Built-in support for FsCheck; makes it easy to build tests with generated/random data or to build invariant models of your object's/actor's state space
Hello world looks like this:

open Expecto

let tests =
  test "A simple test" {
    let subject = "Hello World"
    Expect.equal subject "Hello World" "The strings should equal"
  }

[<EntryPoint>]
let main args =
  runTestsWithArgs defaultConfig args tests
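And as a sketch of the FsCheck support mentioned above (assuming the Expecto.FsCheck package is referenced, which provides testProperty):

open Expecto

// FsCheck generates the input lists; the test fails if the
// property returns false for any generated case.
let properties =
  testProperty "reversing twice yields the original list" <| fun (xs: int list) ->
    List.rev (List.rev xs) = xs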