Basic testing functionality in SBT

How do I create a simple unit test for my application using SBT's test feature?
I'm hoping the answer is that I can write a single file in src/test/scala for my project that imports some special testing package from SBT which makes writing tests as easy as writing a single method.
The tutorial ExampleSbtTest seems to be doing something more complicated than what I need, and I can't find anything simpler on the SBT GoogleCode page.

Testing with SBT
No matter which version of SBT you use, you basically have to follow these steps:
Include your desired testing framework as a test dependency in your project configuration.
Create a dedicated testing folder within your source tree, usually src/test/scala, if it isn't present already.
As always: write your tests, specs, etc.
Those basic steps are identical for the sbt 0.7 branch (the one from Google Code) and the current sbt 0.10 branch (now developed and documented on GitHub). However, there are minor differences in how you define the testing dependencies, since 0.10 provides a new quick-configuration method not present in 0.7.
Defining the dependency for SBT 0.7
Here is how you create a basic test (based on ScalaCheck) with sbt 0.7. Create a new sbt 0.7 project by calling sbt in an empty folder. Change into the automatically created project folder and create a new build folder:
# cd [your-project-root]/project
# mkdir build
Change into the newly created build folder and create your first project build file, Project.scala, with the following content:
class Project(info: ProjectInfo) extends DefaultProject(info) {
  val scalacheck = "org.scala-tools.testing" %% "scalacheck" % "1.9" % "test"
}
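After saving the project definition, have sbt recompile it and fetch the new dependency from the sbt console (sbt 0.7's standard reload and update actions):
> reload
> update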
Since for 0.7 the testing folder is created automatically, you can start writing your first test right away; skip ahead to the paragraph "Create a simple ScalaCheck test".
Defining the dependency for SBT 0.10
For 0.10 you can use the sbt console to add the dependency. Just start sbt in your project directory and enter the following commands:
set libraryDependencies += "org.scala-tools.testing" %% "scalacheck" % "1.9" % "test"
session save
You can then close the sbt console and have a look at your project's build.sbt file. As you can easily see, the above libraryDependencies line was added to your project's quick configuration.
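For example, after saving the session, build.sbt should contain a line like this (surrounding content may vary):
libraryDependencies += "org.scala-tools.testing" %% "scalacheck" % "1.9" % "test"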
Since 0.10 doesn't create the source folders automatically, you have to create the testing folder on your own:
# cd [project-root]
# mkdir -p src/test/scala
That's all it takes to get started with 0.10. Moreover, the documentation about testing with 0.10 is far more detailed than the old one; see the testing wiki page for further details.
Create a simple ScalaCheck test
Create the following test file src/test/scala/StringSpecification.scala (taken from the ScalaCheck homepage):
import org.scalacheck._

object StringSpecification extends Properties("String") {
  property("startsWith") = Prop.forAll((a: String, b: String) => (a+b).startsWith(a))

  property("endsWith") = Prop.forAll((a: String, b: String) => (a+b).endsWith(b))

  // Is this really always true?
  property("concat") = Prop.forAll((a: String, b: String) =>
    (a+b).length > a.length && (a+b).length > b.length
  )

  property("substring") = Prop.forAll((a: String, b: String) =>
    (a+b).substring(a.length) == b
  )

  property("substring") = Prop.forAll((a: String, b: String, c: String) =>
    (a+b+c).substring(a.length, a.length+b.length) == b
  )
}
As already indicated, this basic check will fail for the "concat" specification, but those are the basic steps needed to get started with testing and sbt. Just adapt the included dependency if you want to use another testing framework.
Run your tests
To run your test, open the sbt console and type
> test
That will run all tests present in your src/test tree, no matter whether they are Java- or Scala-based. So you can easily reuse your existing Java unit tests and convert them step by step to Scala.
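If you only want to run a single suite, both branches also provide a test-only action that takes the test's name:
> test-only StringSpecification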

Related

Generate test results using xunit in VSO build task for asp.net core app

I have this build:
It works fine. The only issue is that the Test Results are overridden. So I actually end up with the test results for the last test project executed.
This is executed by the build engine:
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Services.UnitTests/project.json --configuration release -xml ./TEST-tle.xml
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Web.UnitTests/project.json --configuration release -xml ./TEST-tle.xml
What could help:
1) Having "dotnet test" generate an XML output file per project - I did not find a way to do that.
2) Using a variable for the -xml output file in the build task. That variable could be a random string/number, or just the name of the project being tested - like what the build engine feeds to "dotnet.exe test". No way to do that either.
Any ideas? Thanks.
I think that, although you're running the task against all of the projects in one go, because the .NET Core (Preview) task doesn't have a working-directory setting, the test results are being generated at the solution root (or similar) for each project in turn, each run overwriting the last.
I set mine up using simple command line tasks...
Tool: dotnet
Arguments: test -xml testresults.xml
Working folder: {insert the folder for the project to test here}
These work fine but I have one set up for each project. You could try creating a task for each library and adding the full path to the test results argument (or name them appropriately as starain suggested).
This feels like a minor bug to me.
Based on my test, it doesn’t recognize the date variable as Build Number.
To deal with this issue, you can add another .NET Core (Test) step per test project, each writing to a different result file. For example, the two steps' effective commands might look like this (a sketch using the paths from the question; the result-file names are illustrative):
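C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Services.UnitTests/project.json --configuration release -xml ./TEST-services.xml
C:\Program Files\dotnet\dotnet.exe test C:/agent/_work/4/s/test/Web.UnitTests/project.json --configuration release -xml ./TEST-web.xml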

Adding unit tests to a F# project in VSCode

I'm using VSCode and the Ionide suite of packages to create a console application in F#. I need to add unit tests to the application so that when I ctrl+shift+p FAKE: Build the project, the tests are run during the build process.
I've created a dummy project in Github as an example.
Initially, the test dir was not there. I created the test dir and into that folder created a second project TestProj.Test (in hindsight, I should have used more descriptive names) for testing purposes. I added the .fsproj file from TestProj to this project so that I could reference the SimpleFunctions.fs. NUnit.Framework and FsUnit are added to the TestProj.Test. Test.fs contains two simple tests.
I intentionally created the TestProj.Test as an F# library because I read on SO that the testing project needed to be a library rather than a console app.
I added lines 9, 31-37, and 47 to the default build.fsx file that comes with Ionide. However, when I build the whole project (i.e., TestProj), the build fails and I get the following error:
1) System.Exception: NUnit: cannot run tests (the assembly list is empty).
at Fake.NUnitSequential.NUnit(FSharpFunc`2 setParams, IEnumerable`1 assemblies) in C:\code\fake\src\app\FakeLib\UnitTest\NUnit\Sequential.fs:line 22
at FSI_0005.Build.clo#31-3.Invoke(Unit _arg3)
at Fake.TargetHelper.runSingleTarget(TargetTemplate`1 target) in C:\code\fake\src\app\FakeLib\TargetHelper.fs:line 492
Line 22 of the Sequential.fs suggests that assemblies is empty.
What am I doing wrong? How should I set up the build.fsx file so that the tests in TestProj.Test run successfully? Alternatively, is there something wrong with the Tests.fs file in TestProj.Test? This seems particularly difficult; is there an easier way to include tests that run automatically with VSCode, Ionide, and F#?
There are a few issues in your project:
trying to test before build "Clean" ==> "Test" ==> "Build" ==> "Deploy"
=> change target dependencies to "Clean" ==> "Build" ==> "Test" ==> "Deploy"
separate paket configuration for test (paket.dependencies, paket.lock in test subfolder) which leads to inconsistent versions of referenced dependencies
=> remove paket.dependencies and paket.lock from test
poisonous mix of NUnit versions
=> remove explicit references to NUnit.Framework from paket.dependencies and run paket.exe install
invalid type extension in test project
=> change to type Test() or delete useless file
building creates output of all projects (and not just src/app) in ./build but tests look for DLLs in ./test
=> change test file pattern to buildDir + "**/*.Test.dll"
if you want to use NUnit3
=> open Fake.Testing and use NUnit3 instead of NUnit
finally, you should commit paket.bootstrapper.exe
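Put together, the relevant parts of a fixed build.fsx might look roughly like this (a condensed sketch, assuming the FAKE 4 API used by the Ionide template; the FakeLib path, buildDir, and project globs are illustrative):
#r "packages/FAKE/tools/FakeLib.dll"
open Fake
open Fake.Testing   // NUnit3 lives here

let buildDir = "./build/"

Target "Clean" (fun _ -> CleanDir buildDir)

Target "Build" (fun _ ->
    !! "src/**/*.fsproj" ++ "test/**/*.fsproj"
    |> MSBuildRelease buildDir "Build"
    |> Log "Build-Output: ")

Target "Test" (fun _ ->
    // pick up the test assemblies from the build output, not ./test
    !! (buildDir + "**/*.Test.dll")
    |> NUnit3 id)

Target "Deploy" (fun _ -> ())

// test after build, not before
"Clean" ==> "Build" ==> "Test" ==> "Deploy"

RunTargetOrDefault "Deploy"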
I recommend you either use a predefined template or start small, making sure you understand each step and checking that it works as expected. Once you've gone past the point of a non-working solution, it is extremely hard to get back on track.

WebStorm run all dart unit tests

In WebStorm 11 I want to create a run configuration which runs all dart tests in my project.
However there is no option to do this in the "Dart Test" configuration template. The only options are:
Test Kind: All in file, Test group, single test
Test file: must point to a .dart file, otherwise I get "Dart file is not found"
VM Options (text input)
If I point WebStorm to a single test file this command gets executed in the test window:
C:\path\to\dart\bin\dart.exe --ignore-unrecognized-flags --checked --trace_service_pause_events file:\\\C:\path\to\dart\bin\snapshots\pub.dart.snapshot run test:test -r json C:/path/to/project/test/someclass_test.dart
I don't want to create a run configuration for every unit test class I write, there must be a better way.
Currently I prefer to navigate to the project directory and just run
pub run test:test
This runs all tests that live in files ending with _test.dart, which is exactly what I want. More info here: https://github.com/dart-lang/test#running-tests
Is there no such option in WebStorm for dart developers?
According to the WEB-14747 ticket, this functionality is already implemented for the next major version.
You can try latest EAP build of WebStorm v12 here.
I guess that's currently not supported.
The feature to run tests this way is quite new anyway.
If you think this feature is important, please create a feature request at https://youtrack.jetbrains.com/issues/WEB

aggregating gradle multiproject test results using TestReport

I have a project structure that looks like the below. I want to use the TestReport functionality in Gradle to aggregate all the test results to a single directory.
Then I can access all the test results through a single index.html file for ALL subprojects.
How can I accomplish this?
.
|-- ProjectA
|   |-- src/test/...
|   |-- build
|   |   |-- reports
|   |   |   |-- tests
|   |   |   |   |-- index.html (testresults)
|   |-- ..
|   |-- ..
|-- ProjectB
|   |-- src/test/...
|   |-- build
|   |   |-- reports
|   |   |   |-- tests
|   |   |   |   |-- index.html (testresults)
|   |-- ..
|   |-- ..
From "Example 4. Creating a unit test report for subprojects" in the Gradle User Guide:
subprojects {
    apply plugin: 'java'

    // Disable the test report for the individual test task
    test {
        reports.html.enabled = false
    }
}

task testReport(type: TestReport) {
    destinationDir = file("$buildDir/reports/allTests")
    // Include the results from the `test` task in all subprojects
    reportOn subprojects*.test
}
Fully working sample is available from samples/testing/testReport in the full Gradle distribution.
In addition to the subprojects block and testReport task suggested by @peter-niederwieser above, I would add another line to the build below those:
tasks('test').finalizedBy(testReport)
That way if you run gradle test (or even gradle build), the testReport task will run after the subproject tests complete. Note that you have to use tasks('test') rather than just test.finalizedBy(...) because the test task doesn't exist in the root project.
If using the Kotlin Gradle DSL:
val testReport = tasks.register<TestReport>("testReport") {
    destinationDir = file("$buildDir/reports/tests/test")
    reportOn(subprojects.map { it.tasks.findByPath("test") })
}

subprojects {
    tasks.withType<Test> {
        useJUnitPlatform()
        finalizedBy(testReport)
        ignoreFailures = true
        testLogging {
            events("passed", "skipped", "failed")
        }
    }
}
Then execute gradle testReport. Source: How to generate an aggregated test report for all Gradle subprojects
I am posting an updated answer on this topic. I am using Gradle 7.5.1.
TestReport task
In short, I'm using the following script to set up test aggregation from subprojects (based on @Peter's answer):
subprojects {
    apply plugin: 'java'
}

task testReport(type: TestReport) {
    destinationDir = file("$buildDir/reports/allTests")
    // Include the results from the `test` task in all subprojects
    testResults.from = subprojects*.test
}
Note that the reportOn method is deprecated (or soon will be) and replaced with testResults, while at the same time testResults is still incubating as of 7.5.1.
I got the following warning in the IDE:
The TestReport.reportOn(Object...) method has been deprecated. This is scheduled to be removed in Gradle 8.0.
Hint: subprojects*.test is an example of Groovy's spread-dot notation, which collects the test task of each subproject in the list. An equivalent invocation would be subprojects.collect{ it.test }
TestReport#reportOn (Gradle API documentation)
TestReport#testResults (Gradle API documentation)
reportOn replacement for gradle 8 (Gradle Forum)
test-report-aggregation plugin
There is also alternative option for aggregating tests (Since Gradle 7.4). One can apply test-report-aggregation plugin.
If your projects already apply java plugin, this means they will come with jvm-test-suite, all you have to do is apply the plugin.
plugins {
    id 'test-report-aggregation'
}
Then you will be able to invoke test reports through the testSuiteAggregateTestReport task. Personally I didn't use the plugin, but I think it makes sense to use it if you have multiple test suites configured with jvm-test-suite.
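In a standalone aggregation project, the pattern from the Gradle docs looks roughly like this (a sketch; ':application' is a placeholder for whichever subproject's tests you want to aggregate, and the testReportAggregation dependency scope is added by the plugin):
dependencies {
    testReportAggregation project(':application')
}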
Example project can be found in https://github.com/gradle-samples/Aggregating-test-results-using-a-standalone-utility-project-Groovy
For connectedAndroidTest there is an approach published by Google (https://developer.android.com/studio/test/command-line.html#RunTestsDevice, "Multi-module reports" section).
Add the 'android-reporting' plugin to your project's build.gradle.
apply plugin: 'android-reporting'
Execute the Android tests with the additional 'mergeAndroidReports' argument. It will merge all test results of the project modules into one report.
./gradlew connectedAndroidTest mergeAndroidReports
FYI, I've solved this problem using the following subprojects config in my root project's build.gradle file. This way no extra tasks are needed.
Note: this places each module's output in its own reports/<module_name> folder, so subproject builds don't overwrite each other's results.
subprojects {
    // Combine all build results
    java {
        reporting.baseDir = "${rootProject.buildDir.path}/reports/${project.name}"
    }
}
For a default Gradle project, this would result in a folder structure like
build/reports/module_a/tests/test/index.html
build/reports/module_b/tests/test/index.html
build/reports/module_c/tests/test/index.html

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have them run automatically. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
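For example, with a cppunit-based runner, the relevant Makefile.am fragment might look like this (a sketch; the source file names are illustrative, and CPPUNIT_CFLAGS/CPPUNIT_LIBS are assumed to come from PKG_CHECK_MODULES([CPPUNIT], [cppunit]) in configure.ac):
check_PROGRAMS = my-test-executable
my_test_executable_SOURCES = tests/runner.cpp tests/FooTest.cpp
my_test_executable_CXXFLAGS = $(CPPUNIT_CFLAGS)
my_test_executable_LDADD = $(CPPUNIT_LIBS)
TESTS = $(check_PROGRAMS)
Because the executables are listed in check_PROGRAMS rather than bin_PROGRAMS, they are only built for make check and never installed.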
I'm using Check 0.9.10
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code:
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
# CHECK_CFLAGS and CHECK_LIBS are defined by PKG_CHECK_MODULES above
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@
and write the test code, ./tests/check_foo.c:
START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST
/// ...plus the tcase/suite boilerplate to run this test, shown below
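That boilerplate typically looks like this (a standard Check suite/runner sketch for the test above; foo_suite and the "Core" tcase name are illustrative):
#include <check.h>
#include <stdlib.h>

Suite * foo_suite(void)
{
    Suite *s = suite_create("Foo");
    TCase *tc_core = tcase_create("Core");

    tcase_add_test(tc_core, test_foo);   /* register the START_TEST above */
    suite_add_tcase(s, tc_core);
    return s;
}

int main(void)
{
    SRunner *sr = srunner_create(foo_suite());

    srunner_run_all(sr, CK_NORMAL);
    int number_failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}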
With Check you can also use timeouts and raise signals; it is very helpful.
You seem to be asking 2 questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain - but those tests, if I'm understanding you correctly, are for both validating that the environment necessary to build your application exists (dependent libraries and tools) and adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. That isn't conventional, though - putting a 'test' target in your Makefile is the more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then make, then optionally make test, and finally make install.
For the second issue, not wanting cppunit to be a dependency: why not just distribute it with your C++ application? Can you put it right in whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code? I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted, summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, no build output from src/test/* ends up in the tarball; the test code is distributed only as source.
When you do a make distcheck it will run make check and run your tests.
You can use Automake's TESTS to run programs generated with check_PROGRAMS, but this assumes that you are using a log driver and a log compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite using a local rule in the Makefile:
check_PROGRAMS = testsuite
testsuite_SOURCES = ...
testsuite_CFLAGS = ...
testsuite_LDADD = ...

check-local:
	./testsuite