I started to work on a series of unit tests for different kernel modules I am currently writing. To that end, I am using the excellent KUnit framework.
Following the simple test described on the KUnit webpage, I was able to create a first test series that compiles, runs and displays the results as expected.
The next step for me was to run code coverage on those tests to generate a report on how well the testing strategy covers the different modules.
The problem comes when I open the code coverage results: they indicate that no lines in the module I am writing have been hit by my tests. I know for a fact that this is not the case, because in the test function I deliberately generated a failing test using:
KUNIT_FAIL(test, "This test never passes.");
And kunit.py reports that the test failed. Even the source code for the test itself was not reported as being covered...
Does someone have an idea how to solve this?
I'll post the answer to my own question, hoping this will help someone in the same situation.
There were two issues:
The GCC version (as stated in the KUnit documentation) must be lower than 7.x.
Code coverage for KUnit is only available starting with kernel 5.14.
Once I used GCC 6.5.0 and a 5.15 kernel and followed the KUnit manual section on code coverage, everything worked as hoped!
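For reference, here is roughly the sequence I ended up with. Treat it as a sketch rather than a recipe: the config entries and flags below are the ones described in the kernel's KUnit coverage documentation for UML on a 5.15 kernel (Documentation/dev-tools/kunit/running_tips.rst), and the gcc-6 path is just an example from my machine, so check the documentation matching your kernel version.
# extra kunitconfig entries for coverage under UML (used instead of CONFIG_GCOV_KERNEL / CONFIG_GCOV_PROFILE_ALL)
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_INFO=y
CONFIG_GCOV=y
# build and run the tests with the older compiler
./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
# extract the coverage data from the build directory and render an HTML report
lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
genhtml -o /tmp/coverage_html coverage.info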
Thanks for providing the information.
I am also working on code coverage in KUnit, and I am facing an issue where I am unable to generate .gcda files. The abnormal things I observed are listed below:
I am unable to enable CONFIG_GCOV_PROFILE_ALL.
The generated coverage.info file has no content.
No .gcda files are generated.
I received a warning while running the command below:
lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
WARNING: no .gcda files found in .kunit/ - skipping!
and received an error while running the command below:
genhtml -o /tmp/coverage_html coverage.info
ERROR: no valid records found in tracefile coverage.info
The links to the sources I have followed are given below. Could you please point me in the right direction to generate code coverage in KUnit?
kunit tutorial
gcov in kernel
We have an Azure DevOps build pipeline with the following steps:
1. Prepare Analysis for SonarQube
2. Run unit tests
3. Run integration tests
4. Run code analysis
For #4, when we try to run the code analysis, the SonarQube scanner gives a weird error:
java.lang.IllegalStateException: Line 92 is out of range in the file
But the file has only 90 lines of code. I am not sure why it is complaining about this.
SonarQube scanner failing with line out of range
In general, this issue occurs when a file has gone down in number of lines while Sonar still uses the cached report; that is why it looks for a line that is now out of range.
Just like user1014639 said:
The problem was due to the old code coverage report that was generated before updating the code. It was fixed after generating the coverage reports again. So, please also make sure that any coverage reports left behind from the previous run are cleared and new coverage reports are in place.
So, please try to run the command line:
mvn clean test sonar:sonar
to clean the old report.
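If you want to double-check that the stale report really was removed, you can look at the coverage output locations before and after cleaning. The paths below assume JaCoCo with the default Maven layout, which may not match your setup:
ls target/jacoco.exec target/site/jacoco
mvn clean
ls target/jacoco.exec target/site/jacoco
After mvn clean the second ls should report that those files no longer exist, and the next test and sonar:sonar run will regenerate them from the current code.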
Besides, if the above does not help, you should make sure the analyzed source code is strictly identical to the code used to generate the coverage report:
Check this thread for some details.
Hope this helps.
I'm getting started with AceUnit on Ubuntu, but I don't know how to see the results of the unit tests.
The first thing I did was try the example test called "basic". I generated the AceUnit.jar file with the command:
make -s -j
Then I pasted it into the example's directory. The next step was to generate the "AceUnitTest.h" file with the command:
java -jar AceUnit.jar AceUnitTest >AceUnitTest.h
After that I ran the makefile and it compiled correctly, but I don't see any file or any kind of graphical way to prove that the test was successful. Maybe there is something I have been doing wrong, and I hope someone can help me with this.
While searching for a C++-based HTTP client library, I decided to use Casablanca (cpprestsdk) -- so I needed to build it.
I'm running on Ubuntu 16.04.
While following the "common" build steps described here: How-to-build-for-Linux, I encountered a build error (when running the make command as the last operation of step 4).
The entire error output can be found here (now it is the last comment in the thread cpprestsdk-build-error#266).
Just to be sure my system has the needed build tools and libraries, I performed the command mentioned in step 2 and checked its output --> which means my system is "good to go".
So after struggling with it a little more, I found an alternative way to build it:
I downloaded the source code from here: Source Package: casablanca (2.8.0-2) [universe], and again followed the same instructions, STARTING FROM STEP 4, from the link mentioned in the question (How-to-build-for-Linux).
This time the make phase was successful! (It is worth mentioning that not all of the unit tests recommended in step 5 passed; I did not spend time trying to understand why...)
I went on and copy-pasted the complete sample provided at the bottom of the following link: cpprestsdk-Getting-Started-Tutorial.
I built the program with the following command (the program contains a single cpp file called main):
g++ -std=c++11 main.cpp -o myProg -lboost_system -lcrypto -lssl -lcpprest
Ran the program
./myProg
and it worked (there was output in the console saying: "Received response status code: 200").
I would be glad to hear if you have encountered the same issue, or whether I did something wrong in my first attempt (or in any other step along the way).
I made a simple C++ project which I hooked up to Travis and Coveralls. As far as I know, I'm uploading the reports correctly, as my source files are shown in Coveralls and are 100% covered.
However, the overall project coverage shows 0%. Why, and how do I fix it?
This is due to mismatched gcov and g++ versions.
The build logs give the following message:
adder.cpp.gcno:version '408*', prefer '406*'
This is why the columns all register 0 in the screenshot above. When the gcov and g++ versions match, that warning disappears and the per-file columns are filled in with real coverage numbers.
Coveralls just makes this error a little tricky to uncover, because each file is flagged as 100% covered if there are no relevant lines, but the summary shows 0% for this state.
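A quick way to confirm the mismatch is to compare the two versions on the build machine before uploading (the gcov-4.8 path below is just an example; tools such as cpp-coveralls also have an option to select which gcov binary to run, if I recall correctly):
g++ --version
gcov --version
# if the versions differ, point the coverage tooling at the matching gcov, for example:
lcov --gcov-tool /usr/bin/gcov-4.8 --capture --directory . --output-file coverage.info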
When I execute go test for a whole package the tests fail with:
$ go test github.com/dm03514/go-edu-db/...
# github.com/dm03514/go-edu-db/backends
go1: internal compiler error: in read_type, at go/gofrontend/import.cc:669
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gccgo-4.9/README.Bugs> for instructions.
FAIL github.com/dm03514/go-edu-db/backends [build failed]
? github.com/dm03514/go-edu-db/cmd [no test files]
# github.com/dm03514/go-edu-db/httpd
go1: internal compiler error: in read_type, at go/gofrontend/import.cc:669
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gccgo-4.9/README.Bugs> for instructions.
FAIL github.com/dm03514/go-edu-db/httpd [build failed]
? github.com/dm03514/go-edu-db/logging [no test files]
While the above tests fail, go install builds correctly and I can run each of my individual test files correctly:
$ go test github.com/dm03514/go-edu-db/backends/backends_test.go
ok command-line-arguments 0.025s
$ go test github.com/dm03514/go-edu-db/httpd/handlers_test.go
ok command-line-arguments 0.021s
Has anyone run into this before? I am brand new to Go, and to get around this I have just been executing each of my test files individually.
The output of go build is empty:
$ go build github.com/dm03514/go-edu-db/...
$
The go version is:
$ go version
go version xgcc (Ubuntu 4.9-20140406-0ubuntu1) 4.9.0 20140405 (experimental) [trunk revision 209157] linux/amd64
This happened to me as well. I ended up commenting out different tests until I was able to see useful output and to see when it would start passing. The root cause was that one of my concurrently running test goroutines was calling t.Errorf after the test had completed (specifically, I was using the testify/assert package, but that eventually calls t.Errorf). The output of go test -v eventually contained this error message:
Fail in goroutine after TestTradeReader_Subscribe has completed
For me, this happened because I was using an httptest.Server (which runs concurrently during my test) and was checking input on a test case that exited quickly and didn't require this check.
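Here is a minimal, self-contained sketch of that failure mode; the names are made up for illustration, and the sleeping goroutine simply stands in for an httptest.Server handler that is still validating a request when the test returns:
package example

import (
    "testing"
    "time"
)

func TestBackgroundCheck(t *testing.T) {
    done := make(chan struct{})
    go func() {
        defer close(done)
        // Stands in for an httptest.Server handler validating an in-flight request.
        time.Sleep(10 * time.Millisecond)
        t.Errorf("request was missing an expected field")
    }()

    // Buggy version: returning here without waiting lets the goroutine call
    // t.Errorf after the test has finished, which produces
    // "Fail in goroutine after TestBackgroundCheck has completed".
    //
    // Fix: wait for the background work before returning (or, as in the answer
    // above, drop the check for cases that exit before the request can happen).
    <-done
}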
The thing that helped me: if you run plenty of test cases in a loop and you create some of the mocked services OUTSIDE the loop, it may cause this problem.
TO SOLVE THIS: just move the creation of your mocked objects for your complex tests inside the loop and you're done!
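A small hypothetical example of what that looks like in practice (fakeService and the test cases are made up for illustration): creating the mock inside the loop gives every case a fresh instance instead of state accumulated by earlier iterations.
package example

import "testing"

// fakeService is a hypothetical hand-rolled mock that accumulates state.
type fakeService struct{ calls []string }

func (f *fakeService) Do(input string) { f.calls = append(f.calls, input) }

func TestManyCases(t *testing.T) {
    cases := []struct{ name, input string }{
        {"first", "a"},
        {"second", "b"},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            svc := &fakeService{} // fresh mock per iteration, no state shared between cases
            svc.Do(tc.input)
            if got := len(svc.calls); got != 1 {
                t.Errorf("expected exactly 1 recorded call, got %d", got)
            }
        })
    }
}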
There is probably a goroutine leak, or you may be modifying/updating a global variable in one test and not reverting it for the second test.
A second reason for this error could be that your test is not running in a closed environment and is affecting other tests that run after it.
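As a hypothetical illustration of the global-variable case (defaultRetries is a made-up package-level variable): save the old value, override it for the test, and restore it afterwards, for example with t.Cleanup (available since Go 1.14).
package example

import "testing"

// defaultRetries is hypothetical package-level state shared by every test.
var defaultRetries = 3

func TestWithNoRetries(t *testing.T) {
    old := defaultRetries
    defaultRetries = 0
    t.Cleanup(func() { defaultRetries = old }) // restore the original value for later tests

    if defaultRetries != 0 {
        t.Fatalf("expected the retries override to be 0, got %d", defaultRetries)
    }
}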
You can restructure your tests so that the test giving the error runs first, so that it succeeds.