I have a YAML pipeline that builds my code and runs some tests. My code base is pretty huge and full of weird legacy code that fails my unit tests. This is why I want to track only new errors in my code, while the legacy errors should be ignored.
- script: "nunit3-console.exe" MyAssembly.dll
displayName: 'Run unit-tests'
failOnStderr: false
continueOnError: true
When executing that script I get a partially successful build, because the exit code of NUnit is 5.
As I want only new errors to make the build fail, I also implemented a quality gate:
- task: BuildQualityChecks@8
  displayName: 'Check for test-errors'
  inputs:
    checkWarnings: true
    warningFailOption: 'build'
    warningTaskFilters: '/^Run unit-tests$/i'
    warningFilters: |
      /\d+\) Error :/i
      /\d+\) Failed :/i
However, the entire build stays "partially successful" because of the previous task. Is there any way to ignore the outcome of the 'Run unit-tests' task, since I handle the failures within the quality gate anyway?
You can set the exit code for your script task:
- script: |
    "nunit3-console.exe" MyAssembly.dll
    exit 0
  displayName: 'Run unit-tests'
  failOnStderr: false
Now that task always succeeds. Notice that I also omitted the continueOnError property, as the task never produces an error anymore.
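If a later task still needs the raw NUnit result, you could capture the exit code in a pipeline variable before overriding it. A minimal sketch, assuming a Windows agent (where script runs as a batch script) and a hypothetical variable name nunitExitCode:

- script: |
    "nunit3-console.exe" MyAssembly.dll
    rem Publish NUnit's exit code for later tasks, then report success
    echo ##vso[task.setvariable variable=nunitExitCode]%ERRORLEVEL%
    exit 0
  displayName: 'Run unit-tests'
  failOnStderr: false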
You can also do this by calling it from Python:
- script: python -c "import os; os.system('nunit3-console.exe MyAssembly.dll'); print('FINISHED')"
displayName: 'Run unit-tests'
failOnStderr: false
continueOnError: true
This task always succeeds, even when there are test errors, because the return value of os.system is discarded and the Python process itself exits with code 0.
When running unit tests through PhpStorm, the output of the function codecept_debug is ignored.
I've set my Test runner options to:
--colors --debug -v
In my codeception.yml I also have:
settings:
    bootstrap: _bootstrap.php
    colors: true
    debug: true
    memory_limit: 1024M
On the command line the output of codecept_debug is displayed.
This is not implemented yet: https://youtrack.jetbrains.com/issue/WI-36233. Please vote or comment on the issue.
At the moment this is still unsupported by PhpStorm; however, you can work around it by
changing your codecept_debug( $variable ) calls to echo print_r( $variable, true );
Just as with codecept_debug, the output is then displayed in PhpStorm's terminal when running the tests.
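For illustration, a minimal before/after sketch of that workaround:

// Before: swallowed by PhpStorm's test runner
codecept_debug( $variable );

// After: printed like regular output, so PhpStorm shows it
echo print_r( $variable, true );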
I am working on GitLab and I would like to set up CI (it is the first time I have configured something like this, so please assume that I am a beginner).
I wrote some code in C with a simple CUnit test, and I configured CI with a "build" job and a "test" job. My test job succeeds even though I wrote a deliberately failing test: when I open the job on GitLab I see the failed output, but the job is marked "Passed".
How can I configure GitLab to understand that the test failed?
I think there is a parsing configuration somewhere; I tried "CI / CD Settings -> Test coverage parsing", but I think that is the wrong place, and it did not work.
Here is the output of my test:
CUnit - A unit testing framework for C - Version 2.1-2
http://cunit.sourceforge.net/
Suite: TEST SUITE FUNCTION
Test: Test of function::triple ...FAILED
1. main.c:61 - CU_ASSERT_EQUAL(triple(3),1)
Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      1      1    n/a      0        0
               tests      1      1      0      1        0
             asserts      3      3      2      1      n/a
Elapsed time = 0.000 seconds
GitLab supports test reports in JUnit format and coverage reports in Cobertura XML format.
See the links for C++ examples that may help you. As an example for CUnit, the .gitlab-ci.yml file should look like this:
cunit:
  stage: test
  script:
    - ./my-cunit-test
  artifacts:
    when: always
    reports:
      junit: ./my-cunit-test.xml
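Note that CUnit does not produce JUnit XML on its own. Below is a sketch of a test runner using CUnit's automated mode, which writes my-cunit-test-Results.xml in CUnit's own XML schema; that file still needs converting to JUnit format (e.g. via an XSLT stylesheet) before GitLab can parse it. The suite and test names are taken from the output above, and test_triple is assumed to be defined elsewhere:

#include <CUnit/CUnit.h>
#include <CUnit/Automated.h>

/* The failing test from the question, assumed to be defined elsewhere. */
extern void test_triple(void);

int main(void) {
    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();

    CU_pSuite suite = CU_add_suite("TEST SUITE FUNCTION", NULL, NULL);
    CU_add_test(suite, "Test of function::triple", test_triple);

    /* Automated mode writes my-cunit-test-Results.xml instead of console output. */
    CU_set_output_filename("my-cunit-test");
    CU_automated_run_tests();

    unsigned int failures = CU_get_number_of_failures();
    CU_cleanup_registry();

    /* A non-zero exit code is what actually marks the GitLab job as failed. */
    return failures > 0 ? 1 : 0;
}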
I would like to compile a binary which runs a certain subset of tests. When I run the following, it works:
ubuntu@ubuntu-xenial:/ox$ cargo test hash::vec
Finished dev [unoptimized + debuginfo] target(s) in 0.11 secs
Running target/debug/deps/ox-824a031ff1732165
running 9 tests
test hash::vec::test_hash_entry::test_get_offset_tombstone ... ok
test hash::vec::test_hash_entry::test_get_offset_value ... ok
test hash::vec::test_hash_table::test_delete ... ok
test hash::vec::test_hash_table::test_delete_and_set ... ok
test hash::vec::test_hash_table::test_get_from_hash ... ok
test hash::vec::test_hash_table::test_get_non_existant_from_hash ... ok
test hash::vec::test_hash_table::test_override ... ok
test hash::vec::test_hash_table::test_grow_hash ... ok
test hash::vec::test_hash_table::test_set_after_filled_with_tombstones ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 8 filtered out
When I try to run target/debug/deps/ox-824a031ff1732165, it runs all my tests, not just the 9 specified in hash::vec.
I've tried to run cargo rustc --test hash::vec but I get
error: no test target named `hash::vec`
cargo rustc -- --test works, but creates a binary that runs all tests. If I try cargo rustc -- --test hash::vec, I get:
Compiling ox v0.1.0 (file:///ox)
error: multiple input filenames provided
error: Could not compile `ox`.
cargo rustc -h says that you can pass NAME with the --test flag (--test NAME Build only the specified test target), so I'm wondering what "NAME" is and how to pass it in so I get a binary that only runs the specified 9 tests in hash::vec.
You can't, at least not directly.
In the case of cargo test hash::vec, the hash::vec is just a substring matched against the full path of each test function when the test runner is executed. That is, it has absolutely no impact on which tests get compiled, only on which tests run. In fact, this parameter is passed straight to the test runner; Cargo doesn't interpret it at all.
In the case of --test NAME, NAME is the name of the test source. As in, passing --test blah tells Cargo to build and run the tests in tests/blah.rs. It's the same sort of argument as --bin NAME (for src/bin/NAME.rs) and --example NAME (for examples/NAME.rs).
If you really want to only compile a particular subset of tests, the only way I can think of is to use conditional compilation via features. You'd need a package feature for each subset of tests you want to be able to enable/disable.
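For illustration, a sketch of that approach with a hypothetical hash-tests feature. Declare the feature in Cargo.toml (hash-tests = [] under [features]), then gate the test modules on it:

// src/hash/vec.rs
// These tests are compiled only when testing with the feature enabled:
//     cargo test --features hash-tests
#[cfg(all(test, feature = "hash-tests"))]
mod test_hash_table {
    #[test]
    fn test_delete() {
        // ...
    }
}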
This functionality has found its way into Cargo. cargo build now has a parameter
--test [<NAME>] Build only the specified test target
which builds a binary with the specified set of tests only.
How do I have go test several/packages/... stop after the first test failure?
Building and running the remaining tests takes some time, even though the first failure already gives me something to work with.
Go 1.10 added a new -failfast flag to go test:
The new go test -failfast flag disables running additional tests after any test fails. Note that tests running in parallel with the failing test are allowed to complete.
https://golang.org/doc/go1.10
However, note this does not work across packages: https://github.com/golang/go/issues/33038
Here's a workaround:
for s in $(go list ./...); do if ! go test -failfast -v -p 1 $s; then break; fi; done
To speed up the build phase you can run
go test -i several/packages/...
before the tests to build and install packages that are dependencies of the test.
To stop after the first failure you can use something like
go test several/packages/... | grep FAIL | head -n 1
I am implementing a PowerShell build script for TeamCity to test some functionality, but cannot figure out how to report an error.
I am trying to follow this description:
https://confluence.jetbrains.com/display/TCD8/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-ReportingTests
However, although the script actually results in some tests being registered, it refuses to report errors. I am now back to the basic example from the linked documentation. I have the following PowerShell build step (error output: error, script: source):
Write-Host("##teamcity[testStarted name='className.testName']")
Write-Host("##teamcity[testStdErr name='className.testName' out='error text']")
Write-Host("##teamcity[testFinished name='className.testName']")
Resulting build log (verbose):
[13:27:12]Step 1/5: Output to build log (Powershell)
[13:27:13][Step 1/5] ##teamcity[buildStatisticValue key='buildStageDuration:firstStepPreparation' value='156.0']
[13:27:13][Step 1/5] ##teamcity[buildStatisticValue key='buildStageDuration:buildStepRUNNER_18' value='0.0']
[13:27:13][Step 1/5] Starting: C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -NonInteractive -ExecutionPolicy ByPass -Command - < D:\JetBrains\buildagent\temp\buildTmp\powershell6640337654487221076.ps1
[13:27:13][Step 1/5] in directory: D:\JetBrains\buildagent\work\7e3fac8e390ca38d
[13:27:13][Step 1/5] className.testName
[13:27:13][className.testName] [Test Error Output] error text
[13:27:13][Step 1/5] Process exited with code 0
[13:27:13][Step 1/5] ##teamcity[buildStatisticValue key='buildStageDuration:buildStepRUNNER_18' value='536.0']
I.e. the test is registered in TeamCity as executed, but it succeeds! I would expect the test to fail due to the 'testStdErr' output. What is the correct way to make it fail?
You should use the testFailed directive which is listed on the page you linked:
##teamcity[testFailed name='MyTest.test1' message='failure message' details='message and stack trace']
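Adapted to the script from the question, something like this should register the test as failed (the message and details values are just placeholders):

Write-Host("##teamcity[testStarted name='className.testName']")
Write-Host("##teamcity[testFailed name='className.testName' message='error text' details='stack trace']")
Write-Host("##teamcity[testFinished name='className.testName']")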
Or change the settings under 'Build Failure Conditions' to fail the build if a build step writes to stderr (edit: this is my understanding of the docs anyway):
Fail build if:
[ ] an error message is logged by build runner