I would like to code an NUnit test in PowerShell.
Consider the following NUnit test in C#:
[TestCase(0)]
[TestCase(1)]
public void YabaDabaDoo(int x)
{
    Assert.IsTrue(x > -1);
}
I would like to be able to express the same intent in PowerShell. Obviously, the devil is in the details, but there are valid scenarios in which it is necessary to write tests in a scripting language.
Now, maybe NUnit is not the right choice, in which case what is? It should be on a par with NUnit.
No; they are different technologies.
PowerShell has Pester for testing PowerShell scripts, while NUnit is used to test managed code.
Pester's features include a test runner, assertions, mocking, and more.
Learn more in the Pester wiki guide.
PowerShell scripts can, however, be used to launch the NUnit test runner and, by extension, run NUnit tests.
For example:
$ProjectDir = "."
$PackagesDir = "$ProjectDir\packages"
$OutDir = "$ProjectDir\bin\Debug"

# Install the NUnit test runner
$nuget = "$ProjectDir\.nuget\nuget.exe"
& $nuget install NUnit.Runners -Version 2.6.2 -o $PackagesDir

# Path to the NUnit console test runner
$nunit = "$ProjectDir\packages\NUnit.Runners.2.6.2\tools\nunit-console.exe"

# Find test assemblies in $OutDir
$tests = (Get-ChildItem $OutDir -Recurse -Include *Tests.dll)

# Run the tests
& $nunit /noshadow /framework:"net-4.0" /xml:"$OutDir\Tests.nunit.xml" $tests
NUnit is, as others have mentioned, for managed code. I'd recommend using the Pester module for unit testing PowerShell code. It's available on GitHub and also ships by default with Windows 10. Example:
Describe "YabaDabaDoo" {
    $cases = @{ x = -1 }, @{ x = 0 }, @{ x = 1 }

    It "<x> should be greater than -1" -TestCases $cases {
        param ([int]$x)
        $x | Should -BeGreaterThan -ExpectedValue -1
    }
}
Output:
Describing YabaDabaDoo
[-] -1 should be greater than -1 113ms
Expected '-1' to be greater than the actual value, but got -1.
6: $x | Should -BeGreaterThan -ExpectedValue -1
[+] 0 should be greater than -1 30ms
[+] 1 should be greater than -1 15ms
I'm relatively new to Haskell, so apologies in advance if my terminology is not quite correct.
I would like to implement some plain unit test for a very simple project, managed through cabal. I noticed this very similar question, but it didn't really help. This one didn't either (and it mentions tasty, see below).
I think I can accomplish this by using only HUnit; however, I admit I am a bit confused by all the other "things" that guides on the net talk about:
I don't quite appreciate the difference between the interfaces exitcode-stdio-1.0 and detailed-0.9
I am not sure about the differences (or the mid- and long-term implications) of using HUnit versus QuickCheck or others.
What's the role of tasty that the HUnit guide mentions?
So, I tried to leave all "additional" packages out of the equation, keep everything else as default as I could, and did the following:
$ mkdir example ; mkdir example/test
$ cd example
$ cabal init
Then edited example.cabal and added this section:
Test-Suite test-example
    type:             exitcode-stdio-1.0
    hs-source-dirs:   test, app
    main-is:          Main.hs
    build-depends:    base >=4.15.1.0,
                      HUnit
    default-language: Haskell2010
Then I created test/Main.hs with this content:
module Main where
import Test.HUnit
tests = TestList [
TestLabel "test2"
(TestCase $ assertBool "Why is this not running," False)
]
main :: IO ()
main = do
runTestTT tests
return ()
Finally, I tried to run the whole lot:
$ cabal configure --enable-tests && cabal build && cabal test
Up to date
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (additional components to build)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (ephemeral targets)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Running 1 test suites...
Test suite test-example: RUNNING...
Test suite test-example: PASS
Test suite logged to:
/home/jir/workinprogress/haskell/example/dist-newstyle/build/x86_64-linux/ghc-9.2.4/example-0.1.0.0/t/test-example/test/example-0.1.0.0-test-example.log
1 of 1 test suites (1 of 1 test cases) passed.
And the output is not what I expected.
I'm clearly doing something fundamentally wrong, but I don't know what it is.
In order for the exitcode-stdio-1.0 test type to recognize a failed suite, you need to arrange for your test suite's main function to exit with a failure code when there are any test failures. Fortunately, HUnit provides a runTestTTAndExit function to handle this, so if you replace your main with:
main = runTestTTAndExit tests
it should work fine.
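This is because, under the exitcode-stdio-1.0 interface, cabal decides PASS or FAIL purely from the test executable's exit code; the original main always returned successfully, so the suite always passed. A rough shell sketch of that contract (run_suite is a hypothetical helper for illustration, not part of cabal):

```shell
# Sketch: cabal runs the exitcode-stdio-1.0 test binary and looks
# only at its exit code -- 0 means PASS, anything else means FAIL.
run_suite() {
    "$@"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "Test suite: PASS"
    else
        echo "Test suite: FAIL (exit code $status)"
    fi
    return "$status"
}
```

With runTestTT the process exited 0 regardless of assertion failures, so cabal printed PASS; runTestTTAndExit exits non-zero on failure, which flips the reported result to FAIL.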
I'm working in a large legacy codebase with lots of broken tests, so running go test ./... reports many failures that I don't want to focus on right now.
I tried using +build tags, which work well enough for targeting a certain sub-package, but the run still executes all the tests that have no build tags configured. Going through all the tests right now is not an option for me.
Currently I have to do it the long way with:
go test \
    -coverprofile=coverage.out \
    -tags=moduleB_feature1,moduleA_feature1,moduleA_feature3 \
    code_dir/projectX/moduleA/... \
    code_dir/projectY/moduleB/feature1
Is there a way for me to ignore all the tests without +build tags configured, so I can use ./...?
You can run tests for a specific sub-package. E.g., if you have the package example.org/your/module/some/package, then the command
go test example.org/your/module/some/package
executes tests for some/package only.
This can be combined with selecting tests by regular expression via the -run switch:
go test example.org/your/module/some/package -run Foo
executes all test functions in some/package whose names match the regular expression Foo, such as TestFooBar(t *testing.T).
I'm writing a project to learn how to use Rust and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized that all the green indicated not success but a failure to even run the tests. I just want to run the unit tests. Running cargo test alone, without a separate file argument, fails as well. Why can't I run any test in this project with my current setup?
It can't find your tests because the Rust compiler doesn't know about the module. You need to add mod aggregates to main.rs:
mod aggregates;
fn main() {
println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile for many reasons.
And as Mihir was trying to say, you need to use the name of the test, not the name of the file, to run a specific test:
cargo test min_works
cargo test aggregates
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run
How do I have go test several/packages/... stop after the first test failure?
Building and executing the rest of the tests takes time, even though the first failure already gives me something to work with.
Go 1.10 added a new failfast flag to go test:
The new go test -failfast flag disables running additional tests after any test fails. Note that tests running in parallel with the failing test are allowed to complete.
https://golang.org/doc/go1.10
However, note this does not work across packages: https://github.com/golang/go/issues/33038
Here's a workaround:
for s in $(go list ./...); do if ! go test -failfast -v -p 1 $s; then break; fi; done
To speed up the build phase you can run
go test -i several/packages/...
before the tests, to build and install the packages that the tests depend on.
To stop after the first failure you can use something like
go test several/packages/... | grep FAIL | head -n 1
I have a C++ project in NetBeans using generated Makefiles. I set up a job in Jenkins (continuous integration server) to run the tests configured in NetBeans. Now Jenkins runs the tests and captures their output, but it considers the build successful even when a test fails.
I'm using the Boost Unit Test Framework which of course returns a non-zero code on failure as any proper *nix program would. So I wondered why Jenkins didn't understand when a test failed. Then I found this in the generated Makefile-Debug.mk from NetBeans:
# Run Test Targets
.test-conf:
	@if [ "${TEST}" = "" ]; \
	then \
	    ${TESTDIR}/TestFiles/f1 || true; \
	    ${TESTDIR}/TestFiles/f2 || true; \
	else \
	    ./${TEST} || true; \
	fi
So it seems like they deliberately ignore the return value of all tests. But this doesn't make sense, because then what are your tests testing?
I tried to find a setting in NetBeans to say "Let failing tests break the build" but didn't find anything. I also tried to find a bug in the NetBeans tracker for this but didn't see any in my brief search.
Is there any other reasonable solution? I want Jenkins to fail my build if any test fails. Right now it only fails if a test fails to build, but if it builds and fails to run, success is reported.
It turns out that NetBeans (up to version 8 at least) cannot support this. What I did to work around it is to run make build-tests rather than make test in Jenkins, followed by a loop over all the generated test executables (TestFiles/f* in the build directory) to run them.
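A minimal sketch of that workaround loop, assuming the generated test binaries live under a TestFiles/ directory as in the Makefile above (the function name and directory argument are mine):

```shell
# Run every generated test binary under the given directory and
# return non-zero if any of them fails, so Jenkins marks the build failed.
run_netbeans_tests() {
    dir="$1"
    rc=0
    for t in "$dir"/f*; do
        [ -x "$t" ] || continue
        echo "Running $t"
        if ! "$t"; then
            echo "FAILED: $t"
            rc=1
        fi
    done
    return $rc
}

# In the Jenkins job, after `make build-tests` (path is hypothetical):
# run_netbeans_tests build/Debug/TestFiles || exit 1
```

Unlike the generated `|| true` recipe, this propagates each test's exit status, so crashes and assertion failures break the build as expected.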
This is a major shortcoming in NetBeans' Makefile generator, as it is fundamentally incompatible with running tests outside of NetBeans itself. Thanks to @HEKTO for the link which led me to this page about writing NetBeans testing plugins: http://wiki.netbeans.org/CND69UnitTestsPluginTutotial
What that page tells you is basically that NetBeans relies on parsing the textual output of tests to determine success or failure. What it doesn't tell you is that NetBeans generates defective Makefiles which ignore critical failures in tests, including aborts, segmentation faults, assertion failures, uncaught exceptions, etc. It assumes you will use a test framework that it knows about (which is only CppUnit), or manually write magic strings at the right moments in your test programs.
I thought about taking the time to write a NetBeans unit test plugin for the Boost Unit Test Framework, but it won't help Jenkins at all: the plugins are only used when tests are run inside NetBeans itself, to display pretty status indicators.