Code Coverage on Windows with proprietary testing automation - C++

We have our own test automation software which executes our product exe. We do not have test cases written in C++ but our code is written in C++.
What we want is to run our automation tool on our exe, which will run the test suite, and then find the lines of code that were executed (code coverage).
Is there any way to do the above? Something similar to LCOV?

Semantic Designs' (my company) C++ Test Coverage Tool could be used for this for either MS C++ or GCC.
The tool instruments your source code before you compile it. The compiled binary is executed by whatever means; as it runs, the instrumentation collects test coverage information, and occasionally writes that data to a special file. That file is then analyzed/displayed by a special UI.
If you can get your automation tool to signal when an individual test is complete (this may happen as a natural "last action" on each test, or by other convention), then the test coverage data can be captured on a per-test basis to give you a finer-grain view of the coverage data.
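Schematically, the per-test signaling looks something like the sketch below. The hook name here is hypothetical; the real entry point and signature are whatever the coverage tool actually documents:

// Hypothetical hook exposed by the coverage instrumentation; declaration
// only, resolved by the tool's runtime library in practice.
extern "C" void CoverageDumpAndReset(const char* test_name);

// Called at each test boundary (e.g. as the "last action" of a test)
// so the accumulated coverage data is attributed to that test.
void OnTestComplete(const char* test_name)
{
    CoverageDumpAndReset(test_name);  // write this test's data, clear counters
}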

Related

What are the settings to be set to get Impacted Test results in AzureDev ops for MSTEST

I want to get Impacted test results in MSTest, but I am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
Here are the VSTS log files, where you can see all the configuration done for Impact Analysis.
Here is the test results image, where I cannot see the Impacted results.
My main branch is "Build Development" and my child branch is "Mstest_UT". We have rebased, but I still did not get the Impacted results I expected.
After doing some research, I learned that Impacted test results are only produced if all test cases pass, so I made sure of that too, but still did not get such results.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>()))
        .Returns(new AboutTideEditor() { });
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock test in MSTest, and I am expecting an Impacted test result.
From what I understand from the link you provided, this type of test should be used from the start of your project (the phrase "growth and maturation of the test" hints that the tooling builds up its knowledge of the code base over time). If you bring the tests in halfway through, the system may already be locked into running particular tests in a certain way (some of this MS tooling remains a black box). If that is the case, you should reset it and run from the start, without the program or user having pre-selected (detailed) tests. This may of course set you back several hours of testing, but weigh that against spending and losing even more time searching for what goes wrong. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot there is a difference in the parallel setup (see also the bullets below): it states that some DLL files are not found in the "test assembly". If it is possible to produce a test log, you might want to check that too, to see whether any typos have crept in.
From the page:
At present, TIA is not supported for:
• Multi-machine topology (where the test is exercising an app deployed to a different machine)
• Data-driven tests
• Test adapter-specific parallel test execution
• .NET Core
• UWP
In short: reset the whole test and run "fresh" to see if the errors persist.

Would it be possible to forward MS Unit tests from an EXE to a DLL?

I have an application where there is a mess of code that has a bunch of non-isolated components. This makes things difficult in terms of doing some unit testing. So along with some unit tests in their own separate test DLL, I'm trying to also create some tests within the application DLL. The application DLL is normally invoked from an application EXE.
For some background, this code is 20+ years old and written in native C++. I cannot execute the tests in the DLL directly, as the framework is not set up, so any calls executed within the DLL will not run correctly. I've already tried this without success; maybe I need a more fundamental understanding of the MFC framework to pull it off.
A colleague suggested that it might be possible to have vstest.console somehow run the tests through the EXE, where the framework can be brought up; the test calls would be forwarded to the DLL, and the test results returned back through the EXE to vstest.console, effectively making the EXE a proxy of sorts.
I'm thinking that this might be a longshot, but I'm at a loss as how I can run the tests in the DLL properly. Could it be done? Is there a better way?
For a legacy EXE, you may use a Generic Test (for a console app) or a Coded UI Test (for a GUI app). Technically, a Generic Test or Coded UI Test is a system-level test, but you can still get code coverage from both.
More on Generic Test
Use a generic test to wrap a console app or test harness console that
• Runs from a command line
• Returns error code: 0 <- Pass; Nonzero <- Fail
• Positive tests only for a console app; a test harness may include negative tests within (see the sketch below)
Visual Studio treats generic test just like other tests, but
• Add Generic Tests into Unit Test Project type
• Command line must run .GenericTest file instead of UnitTest1.dll
• vstest.console GenericTest1.GenericTest
NOTE: Set the “run duration” long enough for your EXE.
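For illustration, here is a minimal sketch of a console test harness following the exit-code convention above. The test functions are hypothetical placeholders; in practice they would call into your product code:

#include <cstdio>

// Hypothetical test functions; each returns true on pass.
bool TestParserHandlesEmptyInput() { return true; }
bool TestParserRejectsBadHeader()  { return true; }

int main()
{
    struct TestCase { const char* name; bool (*fn)(); };
    const TestCase tests[] = {
        { "ParserHandlesEmptyInput", TestParserHandlesEmptyInput },
        { "ParserRejectsBadHeader",  TestParserRejectsBadHeader  },
    };
    int failures = 0;
    for (const TestCase& t : tests) {
        const bool ok = t.fn();
        std::printf("%s: %s\n", t.name, ok ? "PASS" : "FAIL");
        if (!ok) ++failures;
    }
    // Generic Test convention: 0 = all passed, nonzero = failure.
    return failures;
}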

How do I mark a unit test as skipped in Netbeans test results window? (C++)

tl;dr: I am writing a C++ project using netbeans and am looking for a way to mark a unit test as skipped.
Details:
I am using Netbeans IDE for C++ development. When adding a unit test (steps here), the IDE generates C++ code with output looking like this:
%SUITE_STARTING%
%SUITE_STARTED%
%TEST_STARTED% time=0 testname (suitename)
%TEST_FAILED%
%TEST_FINISHED%
%SUITE_FINISHED%
This is an output format that Netbeans parses nicely and displays in a test results window.
I have updated my test code with a unit test class and a test suite class that generate this output dynamically, and it works.
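For reference, a minimal sketch of the kind of emitter involved, using only the tokens shown above (the IDE-generated code may attach additional time/name fields to %TEST_FAILED% and %SUITE_FINISHED%):

#include <iostream>
#include <string>

// Runs one test and emits the Netbeans result tokens around it.
void RunTest(const std::string& suite, const std::string& name, bool (*test)())
{
    std::cout << "%TEST_STARTED% time=0 " << name << " (" << suite << ")\n";
    if (!test())
        std::cout << "%TEST_FAILED%\n";
    std::cout << "%TEST_FINISHED%\n";
}

int main()
{
    std::cout << "%SUITE_STARTING%\n";
    std::cout << "%SUITE_STARTED%\n";
    RunTest("suitename", "testname", [] { return true; });
    std::cout << "%SUITE_FINISHED%\n";
    return 0;
}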
My problem is that I would like to mark a unit test as "skipped" (as in, not succeeded, not failed, not executed at all). There are various reasons for this (specifying tests before implementing them, skipping a test because it is blocked by a known defect, etc).
Question: Is this supported by the IDE (the test results window has a "show skipped tests" button, but with no effect as far as I can see), and what kind of output token should the code generate for a test to be "skipped"? (I have tried %TEST_SKIPPED% and %TEST_SKIPPED% testname, but the test results window simply ignores the line.)
tl;dr: Not supported by the IDE out of the box; to get this working, you would need to patch the IDE at the very least.
Details :
Netbeans does support skipped tests as per here.
However, the logic that updates the UI by parsing the test runner's output (see here) doesn't support skipped tests, regardless of whether the built-in Simple Test or CUnit Test is used.
To get skipped test support working, you would need to generate the relevant SkippedTestHandler as part of CndUnitHandlerFactory.java, along with any other modifications to the code generation that is done for the built-in Simple Unit Test, or for any unit test framework you add support for by creating a custom module following the instructions here.
Also, if you wanted to use CUnit, you would have to give it support for skipped tests with a patch similar to the one here.

Obtaining C++ Code Coverage

I'm on Linux.
My code is written in C++.
My program is non-interactive; it runs as "./prog input-file", processes the file, and exits.
I have various unit tests "input-file0, input-file1, input-file2, ..."
For designing new unit tests, I want to know what lines of code existing tests do not cover.
Question: Given that I control how "prog" is compiled/run; how can I get list of the lines of code that "./prog input-file" does not hit?
Thanks!
EDIT: I'm currently using g++, but I'm perfectly happy to switch to LLVM if that makes this possible.
GCC comes with a code coverage tool, gcov:
http://gcc.gnu.org/onlinedocs/gcc/Gcov.html
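Assuming a single-file build for brevity (file names here are placeholders), the workflow looks roughly like this:

g++ --coverage -O0 -o prog prog.cpp   # --coverage = -fprofile-arcs -ftest-coverage
./prog input-file0                    # each run accumulates counts in .gcda files
./prog input-file1
gcov prog.cpp                         # writes prog.cpp.gcov with per-line hit counts

In the resulting .gcov file, executable lines that were never run are marked with #####, which is exactly the "not hit" list you're after. LCOV is a front end for this same gcov data, so it works here too if you want HTML reports.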

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found a post that shows how to call a .bat file using MbUnit, but I want to know whether anyone has done this kind of thing with MSTest.
If so, I would be interested in a sample of what the .bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to our Post-Build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
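One thing worth noting: MSTest.exe returns a nonzero exit code when any test fails, and a post-build event fails on a nonzero exit code, so a failing test will break the build here, which is usually the behavior you want from this setup.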
Personally, I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they call these nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will happen every time you compile, I would look at setting up a continuous integration server like CruiseControl.Net. It will provide a tight feedback cycle without blocking your work by running the tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute when you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo, if you don't already have a license, pick up TeamCity as well. It is another CI server that shows some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.