I'm trying to run a test coverage report on a project. When I run go test ./... -count=1 at the root of my project, it lists all the namespaces as they're tested. For this project it looks something like this:
? some/namespace [no test files]
? some/namespace/A [no test files]
? some/namespace/B [no test files]
ok some/namespace/util 0.260s coverage: 100.0% of statements
However, when I run go test ./... -coverprofile=coverage.out it only produces lines for my util package, and the final line at the bottom shows a test coverage of 100%. While I did get 100% of that namespace covered, I would expect that if I ask for test coverage of an entire project, the total percentage would include packages that don't have tests.
How can I get a proper percentage of test coverage for my whole project?
Try using additional flags:
go test ./... -covermode=atomic -coverprofile=coverage.out -coverpkg=./...
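With -coverpkg=./... the profile is computed against every package in the project, so packages without tests count toward the total instead of being skipped. Once coverage.out is written, you can inspect the per-function breakdown and the overall percentage with the standard cover tool (a usage sketch; the file name matches the command above):
go tool cover -func=coverage.out
go tool cover -html=coverage.out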
I'm relatively new to Haskell, so apologies in advance if my terminology is not quite correct.
I would like to implement some plain unit tests for a very simple project, managed through cabal. I noticed this very similar question, but it didn't really help. This one didn't either (and it mentions tasty, see below).
I think I can accomplish this by using only HUnit - however, I admit I am a bit confused by all the other "things" that guides on the net talk about:
I don't quite understand the difference between the interfaces exitcode-stdio-1.0 and detailed-0.9
I am not sure about the differences (or the mid- and long-term implications) of using HUnit or QuickCheck or others
What's the role of tasty, which the HUnit guide mentions?
So, I tried to leave all "additional" packages out of the equation, keep everything else as "default" as much as I could, and did the following:
$ mkdir example ; mkdir example/test
$ cd example
$ cabal init
Then edited example.cabal and added this section:
Test-Suite test-example
    type:             exitcode-stdio-1.0
    hs-source-dirs:   test, app
    main-is:          Main.hs
    build-depends:    base >=4.15.1.0,
                      HUnit
    default-language: Haskell2010
Then I created test/Main.hs with this content:
module Main where

import Test.HUnit

tests = TestList [
    TestLabel "test2"
        (TestCase $ assertBool "Why is this not running," False)
    ]

main :: IO ()
main = do
    runTestTT tests
    return ()
Finally, I tried to run the whole lot:
$ cabal configure --enable-tests && cabal build && cabal test
Up to date
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (additional components to build)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (ephemeral targets)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Running 1 test suites...
Test suite test-example: RUNNING...
Test suite test-example: PASS
Test suite logged to:
/home/jir/workinprogress/haskell/example/dist-newstyle/build/x86_64-linux/ghc-9.2.4/example-0.1.0.0/t/test-example/test/example-0.1.0.0-test-example.log
1 of 1 test suites (1 of 1 test cases) passed.
And the output is not what I expected.
I'm clearly doing something fundamentally wrong, but I don't know what it is.
In order for the exitcode-stdio-1.0 test type to recognize a failed suite, you need to arrange for your test suite's main function to exit with failure in case there are any test failures. Fortunately, there's a runTestTTAndExit function to handle this, so if you replace your main with:
main = runTestTTAndExit tests
it should work fine.
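If you also want the HUnit summary printed in the terminal rather than only in the log file, you can ask cabal to pass the suite's output straight through (a usage sketch; --test-show-details is a cabal-install option, not something specific to HUnit):
cabal test --test-show-details=direct
With runTestTTAndExit in place, the failing assertion will then both appear in the output and cause the suite to be reported as FAIL.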
I'm writing a project to learn how to use Rust, and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built, I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code, as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well.]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized all the green was indicative not of success but of a failure to even run the tests. I just want to run the unit tests. Running cargo test alone, without a separate file argument, fails as well. Why can't I run any test in this project with my current setup?
It can't find your test because the Rust compiler doesn't know about it. You need to add mod aggregates to your main.rs.
mod aggregates;

fn main() {
    println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile for many reasons.
And as Mihir was trying to say, you need to use the name of the test, not the name of the file, to run a specific test:
cargo test min_works
cargo test aggregates
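If you are not sure which test names the harness actually picked up, you can ask the compiled test binary to list them (a usage sketch; --list is understood by the built-in libtest harness, not by cargo itself):
cargo test -- --list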
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run
I am working on GitLab and I would like to set up CI (it is the first time I have configured something like this, so please assume that I am a beginner).
I wrote some code in C with a simple CUnit test, and I configured CI with a "build" job and a "test" job. My test job succeeds even though I wrote a failing (KO) test: when I open the job on GitLab I see the failed output, but the job is marked "Passed".
How can I configure GitLab to understand that the test failed?
I think there is a parsing configuration somewhere; I tried "CI / CD Settings -> Test coverage parsing", but I think that is the wrong place, and it did not work.
Here is the output of my test:
CUnit - A unit testing framework for C - Version 2.1-2
http://cunit.sourceforge.net/
Suite: TEST SUITE FUNCTION
Test: Test of function::triple ...FAILED
1. main.c:61 - CU_ASSERT_EQUAL(triple(3),1)
Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      1      1    n/a      0        0
               tests      1      1      0      1        0
             asserts      3      3      2      1      n/a
Elapsed time = 0.000 seconds
GitLab supports test reports in JUnit format and coverage reports in Cobertura XML format.
See the links for C++ examples that may help you. As an example for CUnit, the .gitlab-ci.yml file should look like:
cunit:
  stage: test
  script:
    - ./my-cunit-test
  artifacts:
    when: always
    reports:
      junit: ./my-cunit-test.xml
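Note that GitLab only marks a job as failed when its script exits with a non-zero status; a CUnit run that prints FAILED but still returns 0 will keep showing up as "Passed". Until you generate a proper JUnit report, a crude stopgap (a sketch assuming a bash-based runner and the summary format shown in the question) is to make the test job's script fail whenever the output contains a failed test, for example with these two commands in the script: section:
./my-cunit-test | tee cunit-output.txt
if grep -q "FAILED" cunit-output.txt; then exit 1; fi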
I'm using TFS 2017 to run CodedUI ordered unit tests. These are my build steps:
These are my "Run Functional Tests" configurations:
And these are the "Publish Test Results" (I'm not sure they are correct):
The test agent is deployed and the tests are running fine. The problem is that the test results appear as only one result and I can't see a detailed result for each test. This is what my test results look like (the attachments are screenshots I take for each test):
I can reproduce your situation and TFS will treat the tests as one in the log:
DistributedTests: Total Tests : 1, Passed Tests : 0
It's a known issue; please refer to the link below:
How to display test results of individual test in the ordered test
suite in TFS Web
This is a limitation of the run functional test task. You can publish the .trx file using the "publish test results" task and it will show you all the tests, but you won't know which ".orderedtest" they were associated with, etc.
You need to either open the *.trx file in Visual Studio or use the Publish Test Results task (you need to check "Continue on error").
Besides that, also change the outcome filter from "Failed" to "All" on the test results page.
I am running tests through Jenkins on a Windows box. In the "Execute Windows Batch command" portion of my project configuration I have the following command:
nosetests --nocapture --with-xunitmp --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev" --processes=4 --process-timeout=2000
The post build actions have "Publish JUnit test result report" with the Test report XMLs path being:
trunk\automation\selenium\src\nosetests.xml
When I do a test run, the nosetests.xml file is created; however, it is empty, and I am not getting any test results for the build.
I am not really sure what is wrong here.
EDIT 1
I ran the tests with just --with-xunit and REM'd out the --processes options and got test results. Does anyone know of problems with xunitmp not working in a Windows environment?
EDIT 2
I uninstalled and reinstalled nose and nose_xunitmp to no avail.
The nosetests plugin for parallelizing tests and the plugin for producing XML output are incompatible. Enabling them at the same time will produce the exact result you got.
If you want to keep using nosetests, you need to execute tests sequentially or find other means of parallelizing them (e.g. by executing multiple parallel nosetests commands, which is what I do at work).
Alternatively, you can use another test runner like nose2 or py.test, which do not have this limitation.
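For example, a sequential run based on the command from the question (a sketch; it simply drops the multiprocess plugin and the --processes/--process-timeout options, which matches what EDIT 1 reports as working) should produce a populated nosetests.xml:
nosetests --nocapture --with-xunit --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev"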
Apparently the problem is indeed Windows and how it handles threads. We attempted several tests outside of our Windows Jenkins server and they do not work either. Stupid Windows.