SonarQube scanner failing with line out of range - build

We have an Azure DevOps build pipeline with the following steps:
1. Prepare Analysis for SonarQube
2. Run unit tests
3. Run integration tests
4. Run code analysis
For step #4, Run Code Analysis, the SonarQube scanner gives a weird error:
java.lang.IllegalStateException: Line 92 is out of range in the file
But the file has only 90 lines of code. I am not sure why it is complaining about this.

In general, this issue occurs when a file has shrunk to fewer lines but Sonar uses cached coverage data, which is why it looks for a line that is now out of range.
Just like user1014639 said:
The problem was due to the old code coverage report that was generated before updating the code. It was fixed after generating coverage reports again. So, please also make sure that any coverage reports left behind from the previous run are cleared and new coverage reports are in place.
So, please try running the following from the command line:
mvn clean test sonar:sonar
to clean out the old report.
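If your coverage reports are written somewhere mvn clean does not touch, delete them explicitly first. A minimal sketch, assuming a hypothetical coverage/ directory outside target/ (the path is an assumption; adjust it to wherever your reports actually live):
# coverage/ survives 'mvn clean', so remove leftover reports by hand first
rm -rf coverage/
mvn clean test sonar:sonar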
Besides, if the above does not help, make sure the analyzed source code is strictly identical to the one used to generate the coverage report:
Check this thread for some details.
Hope this helps.

Related

gcov not detecting correct kunit tests

I started to work on a series of unit tests for different kernel modules I am currently writing. To that end, I am using the excellent KUnit framework.
Following the simple test described on the KUnit webpage, I was able to create a first test series that compiles, runs and displays the results as expected.
For me the next step was to use code coverage on those results to generate a report of the quality of the coverage of the testing strategies on the different modules.
The problem comes when I open the results of the code coverage: it indicates that no lines were covered by my tests in the module I am writing. I know for a fact that this is not the case, because I generated a failing test in the test function using:
KUNIT_FAIL(test, "This test never passes.");
And kunit.py reports that the test failed. Even the source code for the test was not reported as being covered...
Does someone have an idea of how to solve this?
I'll post the answer to my own question, hoping this will help someone in the same situation.
There were two issues:
The GCC version (as stated in the KUnit documentation) must be lower than 7.x.
Code coverage for KUnit is available starting with kernel 5.14.
Once I used GCC 6.5.0 and a 5.15 kernel and followed the KUnit manual for code coverage, everything worked as hoped!
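For reference, a minimal end-to-end sketch under those constraints, based on the KUnit documentation (the config options, compiler name, and paths are assumptions; check the gcov and KUnit docs for your kernel version):
# Enable gcov in the kunitconfig that kunit.py uses
cat >> .kunit/.kunitconfig <<EOF
CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y
EOF
# Build and run the tests with a pre-7.x compiler
./tools/testing/kunit/kunit.py run --make_options CC=gcc-6
# Collect the coverage data and render an HTML report
lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
genhtml -o /tmp/coverage_html coverage.info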
Thanks for providing the information.
I am also working on the code coverage part in KUnit, and I am facing an issue where I am unable to generate .gcda files. Some abnormal things I observed are given below:
I am unable to enable CONFIG_GCOV_PROFILE_ALL.
The generated coverage.info file has no content.
No .gcda file is generated.
I received a warning while running the command below:
lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
WARNING: no .gcda files found in .kunit/ - skipping!
and I received an error while running the command below:
genhtml -o /tmp/coverage_html coverage.info
ERROR: no valid records found in tracefile coverage.info
The links to the sources I have followed are given below. Could you please point me in the right direction to generate code coverage in KUnit?
kunit tutorial
gcov in kernel

Merge Validation fails when running with Code Coverage when doing unit test through cloudtest

We are trying to run Code Coverage tests using cloud test and it fails with the below exception:
System.Security.VerificationException: Operation could destabilize the runtime.
The tests pass if we run without code coverage, but when code coverage is enabled they fail with the above error.
Things tried:
We tried adding [assembly: SecurityRules(SecurityRuleSet.Level1, SkipVerificationInFullTrust = true)] to the AssemblyInfo.cs files, but it did not help.
Could someone help shed some light why this might be happening or any fix for this?
Thanks in Advance.
You can try setting the following in the .runsettings file for the vstest task and enabling the code coverage check box (https://learn.microsoft.com/en-us/visualstudio/test/configure-unit-tests-by-using-a-dot-runsettings-file?view=vs-2019):
<UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
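For context, a sketch of where that element sits in a .runsettings file, following the layout described in the Microsoft docs linked above:
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <!-- Use verifiable instrumentation so coverage does not trip the runtime verifier -->
            <UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>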
The fix is present in versions of the vstest task greater than 2.153.0.
The Test Platform version needs to be greater than 16.1 (anything greater than 16.0.2 also works).
You can check https://github.com/microsoft/vstest/pull/1997 for more details.
You can use the combination of the following (a sketch of pinning the package version follows the list):
Latest Newtonsoft.Json 12.0.2 (doesn't work with 12.0.1)
Latest stable Test platform: 16.2 (should be >16.0.2)
Latest VSTest task 2.156.2 (should be >2.153.0)
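A minimal sketch of pinning that Newtonsoft.Json version in an SDK-style project file, assuming the PackageReference format (the project file itself is yours; this is not from the original question):
<!-- Pin Newtonsoft.Json to 12.0.2; 12.0.1 does not work with this fix -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.2" />
</ItemGroup>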
Here is a ticket with the same issue you can refer to.

Not getting any test results with xunitmultiprocess

I am running tests through Jenkins on a windows box. In my "Execute Windows Batch command" portion of the project configuration I have the following command:
nosetests --nocapture --with-xunitmp --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev" --processes=4 --process-timeout=2000
The post build actions have "Publish JUnit test result report" with the Test report XMLs path being:
trunk\automation\selenium\src\nosetests.xml
When I do a test run, the nosetests.xml file is created, however it is empty, and I am not getting any Test Results for the build.
I am not really sure what is wrong here.
EDIT 1
I ran the tests with just --with-xunit and REM'd out the --processes option, and I got test results. Does anyone know of problems with xunitmp not working in a Windows environment?
EDIT 2
I uninstalled and reinstalled nose and nose_xunitmp to no avail.
The nosetests plugin for parallelizing tests and the plugin for producing XML output are incompatible. Enabling them at the same time will produce exactly the result you got.
If you want to keep using nosetests, you need to execute tests sequentially or find other means of parallelizing them, e.g. by executing multiple parallel nosetests commands, which is what I do at work (see the sketch below).
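A rough sketch of that approach, shown as a POSIX shell script with hypothetical test directories (on a Windows box you would translate this to a batch script or separate Jenkins jobs); each invocation writes its own XML file, and Jenkins can then collect results_*.xml:
# Run independent slices of the suite in parallel, one results file each
nosetests tests/smoke --with-xunit --xunit-file=results_smoke.xml &
nosetests tests/regression --with-xunit --xunit-file=results_regression.xml &
# Wait for both slices; check the individual XML files for failures afterwards
wait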
Alternatively you can use another test runner like nose2 or py.test which do not have this limitation.
Apparently the problem is indeed Windows and how it handles threads. We attempted several tests outside of our Windows Jenkins server and they do not work either. Stupid Windows.

Teamcity running build steps even when tests fail

I am having problems with Teamcity, where it is proceeding to run build steps even if the previous ones were unsuccessful.
The final step of my Build configuration deploys my site, which I do not want it to do if any of my tests fail.
Each build step is set to only execute if all previous steps were successful.
In the Build Failure Conditions tab, I have checked the following options under Fail build if:
- build process exit code is not zero
- at least one test failed
- an out-of-memory or crash is detected (Java only)
This doesn't work: even when tests fail, TeamCity deploys my site. Why?
I even tried to add an additional build failure condition that will look for specific text in the build log (namely "Test Run Failed.")
When viewing a completed test in the overview page, you can see the error message against the latest build:
"Test Run Failed." text appeared in build log
But it still deploys it anyway.
Does anyone know how to fix this? It appears that this issue has been open for a long time; see here.
Apparently there is a workaround:
So far we do not consider this feature as very important, as there is an obvious workaround: the script can check the necessary condition and not produce the artifacts as configured in TeamCity.
E.g. a script can move the artifacts from a temporary directory to the directory specified in TeamCity as "publish artifacts from" just before the finish, and only in case the build operations were successful.
But it is not clear to me exactly how to do that, and it doesn't sound like the best solution either. Any help appreciated.
Edit: I was also able to work around the problem with a snapshot dependency, where I have a separate 'deploy' build that depends on the test build; now it doesn't run if tests fail.
This was useful for setting the dependency up.
This is a known problem as of TeamCity 7.1 (cf. http://youtrack.jetbrains.com/issue/TW-17002) which has been fixed in TeamCity 8.x+ (see this answer).
TeamCity distinguishes between a failed build and a failed build step. While a failing unit test will fail the build as a whole, unfortunately TeamCity still considers the test step itself successful because it did not return a non-zero error code. As a result, subsequent steps will continue running.
A variety of workarounds have been proposed, but I've found they either require non-trivial setup or compromise on the testing experience in TeamCity.
However, after reviewing a suggestion from @arex1337, we found an easy way to get TeamCity to do what we want. Just add an extra PowerShell build step after your existing test step that contains the following inline script (replacing YOUR_TEAMCITY_HOSTNAME with your actual TeamCity host/domain):
# Ask the TeamCity REST API for the overall status of the current build
$request = [System.Net.WebRequest]::Create("http://YOUR_TEAMCITY_HOSTNAME/guestAuth/app/rest/builds/%teamcity.build.id%")
$xml = [xml](New-Object System.IO.StreamReader $request.GetResponse().GetResponseStream()).ReadToEnd()
# Read the status attribute from the root <build> element
Microsoft.PowerShell.Utility\Select-Xml $xml -XPath "/build" | % { $status = $_.Node.status }
if ($status -eq "FAILURE") {
    # Throwing makes this step exit non-zero, so later steps set to run
    # "Only if all previous steps were successful" will be skipped
    throw "Failing this step because the build itself is considered failed. This is our way to work around the fact that TeamCity considers a test step successful even if there are test failures. See http://youtrack.jetbrains.com/issue/TW-17002"
}
This inline PowerShell script just uses the TeamCity REST API to ask whether or not the build itself, as a whole, is considered failed (the variable %teamcity.build.id% will be replaced by TeamCity with the actual build id when the step is executed). If the build as a whole is considered failed (say, due to a test failure), the script throws an error, causing the process to return a non-zero error code, which in turn causes the individual build step to be considered unsuccessful. At that point, subsequent steps can be prevented from running.
Note that this script uses guestAuth, which requires the TeamCity guest account to be enabled. Alternatively, you can use httpAuth instead, but you'll need to update the script to include a TeamCity username and password (e.g. http://USERNAME:PASSWORD@YOUR_TEAMCITY_HOSTNAME/httpAuth/app/rest/builds/%teamcity.build.id%).
So, with this additional step in place, all subsequent steps set to execute "Only if all previous steps were successful" will be skipped if there are any previous unit test failures. We're using this to prevent automated deployment if any of our NUnit tests are not successful until JetBrains fixes the problem.
Thanks to @arex1337 for the idea.
Just to prevent confusion: this issue is fixed in TeamCity 8.x, so we don't need those workarounds now.
You can specify the step execution policy via the Execute step option:
Only if build status is successful: before starting the step, the build agent requests the build status from the server and skips the step if the status is failed.
https://confluence.jetbrains.com/display/TCD8/Configuring+Build+Steps
Of course you need to fail the build if at least one unit test failed:
https://confluence.jetbrains.com/display/TCD8/Build+Failure+Conditions
On the Build Failure Conditions page, in the Fail build if area, specify when TeamCity should fail builds:
at least one test failed: check this option to mark the build as failed if at least one test fails.
This is (as you have found) a known issue with TeamCity; there is a set of linked issues in their issue tracker. The issue is hopefully going to be resolved in the next release of TeamCity (version 8.x).
In the meantime, the way we resolved the issue (for version 6.5.5) was to download the test results file as part of the later steps. This was then parsed to check for any test failures, returning an error code and hence breaking the build properly (performing any cleanup we needed as part of that failure), which would probably work for you. A sketch of such a check follows.
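A hypothetical sketch of that parsing step in PowerShell, assuming NUnit 2-style results XML (the file path, file format, and attribute names are assumptions; adapt them to your test runner's output):
# Scan the downloaded results file and exit non-zero if any test failed
[xml]$results = Get-Content "TestResults\nunit-results.xml"   # path is an assumption
$failures = [int]$results.'test-results'.failures
if ($failures -gt 0) {
    Write-Host "$failures test(s) failed; breaking the build."
    exit 1
}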
A TeamCity build failure does not mean that TeamCity will stop the build; it will still publish the artifacts if your build provides the output files required by TeamCity. It will only update the build status accordingly.
But you can stop the build process by modifying your build script to stop on a test case failure. If you are using MSBuild, ContinueOnError="false" will do that (see the sketch below).
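A minimal sketch of that attribute on a test task; the target name and test command are placeholders, not taken from the original build script:
<!-- Stop the build immediately if the test runner exits with an error -->
<Target Name="RunTests">
  <Exec Command="nunit-console.exe MyProject.Tests.dll" ContinueOnError="false" />
</Target>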
In the end, I was able to solve the problem with a snapshot dependency, where I have a separate 'deploy' build that depends on the test build; now it doesn't run if tests fail.
This was useful for setting the dependency up.

MSBuild: build results shows 'no code coverage' while importing test-result does

I have a strange problem.
My MSBuild runs tests, code coverage, and publishing fine (part of build.txt shown):
Results Top Level Tests
------- ---------------
Passed BuildTestProject.UnitTest1.TestMethod1
Passed BuildTestProject.UnitTest1.TestMethod2
2/2 test(s) Passed
...
Results file: W:\BuildWorkspace\XXX\Test Release\TestResults\XXX_XXX 2009-08-20 11_47_09_Any CPU_Release.trx
Run Configuration: Local Test Run
Waiting to publish...
Publishing results of test run XXX#XXX 2009-08-20 11:47:09_Any CPU_Release to http://XXX:8080/Build/v1.0/PublishTestResultsBuildService2.asmx...
....Publish completed successfully.
When I import these test results on my local machine, I see the code coverage data as expected. But the code coverage details are not shown in the build details that Visual Studio shows when you expand the 'result details'.
Any tips?
I finally found the solution today: my Team Foundation Server itself had problems publishing the results to my build agent. I read somewhere (after searching for CoverAn.exe) that it is installed as a service, so I checked the credentials for the 'Code Coverage Analysis Service'.
It was running under TFSMachine\NETWORK instead of our service account for the Team Foundation Server. I changed this, reran a build with tests in it, and it now publishes the results.
W00t!
Check that the .testrunconfig used by the server has coverage enabled. It may be different from your local .testrunconfig.