Showing 'E' in Jest test coverage report

I am working in a Jest environment, and when I look at the test coverage report, one line is marked as uncovered with the symbol E. What is that?

I found it: E stands for "else path not taken", which means that for the marked if/else statement, the if path has been tested but the else path has not.
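For illustration, a minimal sketch of how the marker comes about (the file names are made up):

// clamp.js
function clamp(value, max) {
  if (value > max) {   // the coverage report puts an "E" next to this line
    return max;
  }
  return value;
}
module.exports = { clamp };

// clamp.test.js
const { clamp } = require('./clamp');

test('clamps values above the max', () => {
  expect(clamp(10, 5)).toBe(5); // only the "if" path is exercised
});

Because no test ever calls clamp with a value <= max, the implicit else path is never taken, so the if line is flagged with E. Adding a second assertion such as expect(clamp(3, 5)).toBe(3) covers that path and the marker disappears.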

Why are test assemblies not being filtered in a VSTS Azure build pipeline despite test assembly patterns?

Here are my test assembly patterns (configuration):
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\$(BuildConfiguration)\*Integration*
After triggering a build, here is the log, where the integration test assembly is also present (this file should have been filtered out and should not be here):
2019-04-23T13:10:33.6689787Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Test\bin\Release\myapp.Services.Test.dll
2019-04-23T13:10:33.6690018Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Integration.Test\bin\Release\myapp.Services.Integration.Test.dll
Because of this, the integration test cases are also running, and I want to run only the unit test cases.
Any idea?
I've found the solution. Here is my latest configuration for the same, which is now working absolutely as expected:
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\myapp\*Integration*\**
!**\*Microsoft.Owin.Testing.dll*
!**\$(BuildConfiguration)\*Integration.Test*.dll
!**\$(BuildConfiguration)\*Microsoft.VisualStudio.TestPlatform*
!**\$(BuildConfiguration)\*MSTest*
!**\$(BuildConfiguration)\*Microsoft.Owin.Testing.dll*
!**\$(BuildConfiguration)\*Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll*
Notice the line that excludes any path containing this pattern:
!**\myapp\*Integration*\**
The path below matches that pattern and will therefore not be included in the result:
2019-04-23T13:10:33.6690018Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Integration.Test\bin\Release\myapp.Services.Integration.Test.dll
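For reference, if the pipeline is defined in YAML rather than in the classic editor, the same patterns go into the Visual Studio Test task's test files input. A minimal sketch, assuming the VSTest@2 task and reusing the patterns above (the displayName is illustrative):

- task: VSTest@2
  displayName: Run unit tests only
  inputs:
    testSelector: testAssemblies
    testAssemblyVer2: |
      **\$(BuildConfiguration)\*test*.dll
      !**\obj\**
      !**\myapp\*Integration*\**
    searchFolder: $(System.DefaultWorkingDirectory)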

How to easily find out which tests fail

I test my code with go test ./... -v -short.
Unfortunately, -v only prints out each test as it happens, but does not leave a summary of the results at the bottom like in Java. This means that if any test failed somewhere at the top, I have to scroll up and look for the word FAIL or search for it in a text editor.
The -failfast flag isn't helping either because some of my tests still get printed after the first test failure for some reason.
I don't really care if tests get run after the initial test failure. I just want to be able to easily tell if any test failed, preferably in just one place (e.g. a summary of how many tests passed or failed, or by seeing a flag if all tests passed or not).
Is there a way to easily tell whether there was a test failure? I don't want to accidentally continue coding while I still have test failures.
I'm on Windows 10 64-bit.
UPDATE: Many thanks to @icza for the findstr tip. I later realized that I also wanted to see the error descriptions along with the test failures, but did not want to run go test twice. This is what I came up with for CMD (it does not work in PowerShell):
go test ./... -v -short > test-results.txt & findstr "FAIL _test" test-results.txt
Now findstr should report test failures as well as error descriptions. And if you want to see the full test results, simply open test-results.txt.
Failing tests are indicated with FAIL in the output. So all you have to do is filter the output for that word.
On Unix systems:
go test ./... |grep FAIL
On Windows:
go test ./... |findstr FAIL
Note that this is purely text processing, it doesn't know anything about go tests and their results. This means you might get "false positives" if a test outputs the word FAIL even if it succeeds. But in practice, this pretty much does the job you want.
A more sophisticated and more accurate way to achieve this would be to pass the -json flag to go test, so that it generates JSON output, which you can process with a program (e.g. written in Go itself; see the sketch after the commands below). Failing tests are indicated by a JSON object having an "Action":"fail" field, e.g.
{"Time":"2019-03-01T12:06:21.108544405+01:00","Action":"fail",
"Package":"some/package","Test":"TestSomething","Elapsed":0.01}
And even if you don't want to write a program for this, filtering the JSON output leaves less chance for false positives (filter for "Action":"fail"):
Unix:
go test ./... -json |grep '"Action":"fail"'
Windows:
go test ./... -json |findstr /C:"\"Action\":\"fail\""
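If you do want the small program mentioned above, here is a minimal sketch in Go (the file name and output format are my own choice, not part of the go toolchain): it reads the go test -json stream from stdin, prints each failing test, and ends with a one-line summary.

// failsummary.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// testEvent mirrors the fields we need from the events emitted by
// `go test -json` (see `go doc test2json` for the full format).
type testEvent struct {
	Action  string
	Package string
	Test    string
}

func main() {
	failures := 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long output lines
	for sc.Scan() {
		var ev testEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line; ignore it
		}
		// Count only per-test failures; package-level "fail" events have no Test name.
		if ev.Action == "fail" && ev.Test != "" {
			failures++
			fmt.Printf("FAIL %s %s\n", ev.Package, ev.Test)
		}
	}
	if failures == 0 {
		fmt.Println("all tests passed")
	} else {
		fmt.Printf("%d test(s) failed\n", failures)
	}
}

Run it as: go test ./... -json | go run failsummary.go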
I found it painless to install gotestsum and get the neat summary at the end.
go install gotest.tools/gotestsum@latest
gotestsum --format testname # Or dots
An alternative, if you only care about the count, is:
go test |grep FAIL |wc -l

SonarQube: see details of failed tests

In SonarQube 5.6.6, I can see on http://example.com/component_measures/metric/test_failures/list?id=myproject that my unit test results were successfully imported. This is indicated by
Unit Test Failures: 1
which I produced by a fake failing test.
I also see the filename of the failing test class in a long list, and I see the number of failed tests (again: 1).
But I can't find any more information: which method failed, the stack trace, stdout/stderr, everything that is also included in the build/reports/test/index.html files generated by Gradle. Clicking on the list entry only takes me to the code and coverage view, and I can't find any indication of which test failed.
Am I doing something wrong in the frontend, is it a configuration problem, or am I looking for a feature which doesn't exist in SonarQube?
This is how it looks currently:
http://example.com/component_measures/domain/Coverage: Here I see that one test failed:
http://example.com/component_measures/metric/test_success_density/list: I can see which file it is:
But clicking on the line above only takes me to the source file. Below it is the test which "failed"; there is no indication that this test failed, and I can't find any way to see the stack trace or the name of the failed test method:
Btw: the page in the first screenshot shows information about unit tests. But if the failing test is an integration test, I don't even see these numbers.
Update
Something like this is probably what I'm looking for:
(found on https://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/)
I have never seen such a view on my installation, and I don't know how to get it or whether it is implemented in the current version.
Unfortunately, test execution details are a deprecated feature as of SonarQube 5.6.
If you install an older version, such as SonarQube 4.x, you will get the following screen, which provides test case result details.
But this screen itself has been removed.
Ref: https://jira.sonarsource.com/browse/SONARCS-657
Basically, the issue is that the unit test details report requires links back to the source code files, but now the unit tests are only linked to assemblies.

VS2010 and Create Unit Tests... no tests generated

I'm trying to add some unit tests to an existing code base using Visual Studio 2010's unit test generator. However, in some cases, when I open a class and right-click --> Create Unit Tests..., after I select the methods to generate tests for, it creates what is essentially a blank test. Are there situations where this can happen? In every case I select at least one public method to generate tests for, and all it generates is this:
using TxRP.Controllers; //The location of the code to be tested
using Microsoft.VisualStudio.TestTools.UnitTesting;
That's it. Nothing else. Strange, right?
I should note that this is all MVC 2 controller code, I have been able to generate tests for other controllers with no problem, and all my controllers follow pretty much the same format. No error seems to be thrown; it happily generates the empty file and adds it to the project as if everything is fine.
Has anyone had experience with the same type of thing happening, and was there any answer found as to why?
UPDATE:
There is in fact an error during generation:
While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
After some research, the only possible solution I found is that this error occurs if you're trying to generate tests into a test file that already exists. However, this solution is not working for me...
If you try to generate tests for a class which already has existing tests in another file in the project, it will just generate an empty file as described above. Changing the filename is not sufficient, nor is using a different location within the project. Basically it seems to enforce the one-testfile-per-class convention across the entire project.
This problem is caused by the previously generated test file having been moved to a folder other than the root folder in the test project.
Resolution
Move the test file into the test project root folder.
Generate the new tests
Move the test file back to the folder location you want in the test project.
I have no clue why they don't call it a BUG! In typical enterprise-level software development it is more than a coincidence that multiple people generate unit tests for different methods of the same class at different points in time.
We always end up with this error and it is not helping us in any way! It feels as if the "Create Unit Tests" context menu is of little use!
Error description:
"While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
"

CppUnit setup for C++

In CppUnit we run unit tests as part of the build, in a post-build step, and we will be running multiple tests as part of this. If any test case fails, the post-build step should not stop; it should go ahead, run all the test cases, and report a summary of how many test cases passed and failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behavior and class, and keep the test project separate from the tested code. Afterwards, just configure your XmlOutputter. You can find an excellent example of how to do this on the yolinux website: http://www.yolinux.com/TUTORIALS/CppUnit.html
This is how we compile the test projects for our main projects and check that everything is OK. After that, it all comes down to maintaining your test code.
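For illustration, here is a minimal sketch of such a main(), in the spirit of the yolinux tutorial linked above (the XML file name is arbitrary): it runs every registered suite to completion, prints a pass/fail summary, writes an XML report, and only then returns a non-zero exit code that the post-build step can inspect.

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/CompilerOutputter.h>
#include <cppunit/XmlOutputter.h>
#include <fstream>
#include <iostream>

int main()
{
    CppUnit::TextUi::TestRunner runner;
    // Pick up every suite registered with CPPUNIT_TEST_SUITE_REGISTRATION.
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Compiler-style summary on stderr (failed assertions plus totals).
    runner.setOutputter(new CppUnit::CompilerOutputter(&runner.result(), std::cerr));
    bool allPassed = runner.run(); // runs all tests; one failure does not stop the run

    // XML report for the build system or CI server to pick up.
    std::ofstream xmlFile("cppunit-results.xml");
    CppUnit::XmlOutputter xmlOut(&runner.result(), xmlFile);
    xmlOut.write();

    return allPassed ? 0 : 1; // the post-build step decides whether to fail the build
}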
Your question is too vague for a precise answer. Usually, a unit test engine returns a code to indicate whether it has failed (like a non-zero return code in the shell on Linux) or generates some output file with the results, and the calling system handles this. If you have written that system yourself (some home-made scripts), you have to add an option to continue test execution even if an error occurred. If you are using a tool such as a continuous integration server, then you have to go through its documentation and find the option that allows you to continue when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification...
Be more specific if you want more clues.
my2c
I would just write your tests this way: instead of using the CPPUNIT_ASSERT macros or the like, write them in regular C++ with some way of logging errors.
You could use a macro for this too of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and to log the expression together with __FILE__ and __LINE__ if it fails. You can also log exceptions, of course, including expected exceptions that were not thrown, simply by writing those checks in your tests (with macros if you want to log the offending expression with __FILE__ and __LINE__).
If you are writing macros, I would advise you to limit the body of each macro to a call to an inline function with extra parameters, as in the sketch below.
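Here is one possible sketch of that idea (LOGASSERT, logCheck and g_failures are my own names, not part of any framework): the macro only captures the expression text, __FILE__ and __LINE__, and forwards everything to an inline function, and main() prints a summary after all checks have run.

#include <iostream>

// Counts failed checks across the whole test program.
int g_failures = 0;

// The macro body is just a call to this inline function with extra parameters.
inline void logCheck(bool passed, const char* expr, const char* file, int line)
{
    if (!passed) {
        ++g_failures;
        std::cerr << "FAILED: " << expr << " at " << file << ":" << line << "\n";
    }
}

#define LOGASSERT(expr) logCheck((expr), #expr, __FILE__, __LINE__)

// Hypothetical code under test.
int add(int a, int b) { return a + b; }

int main()
{
    LOGASSERT(add(2, 2) == 4);
    LOGASSERT(add(2, 2) == 5); // fails, but the run continues

    std::cerr << g_failures << " check(s) failed\n";
    return g_failures == 0 ? 0 : 1;
}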