How to get coverage from Bazel by running the program instead of writing unit tests?

My project is built with Bazel. Besides the unit tests for individual functions, it sometimes needs scenario testing, such as simulating a user's input as black-box testing. To make sure a test scenario is complex enough to find vulnerabilities, I want to measure the coverage of these scenarios.
I would like to avoid writing unit tests for this; instead I want to build with the LLVM flags -fprofile-instr-generate -fcoverage-mapping, run the test program directly, interact with it, and then collect coverage.
I know from Bazel's documentation that the coverage command runs unit tests to collect coverage, but can I get coverage directly from a running program?
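For context, the workflow I have in mind looks roughly like this (a sketch, assuming a clang toolchain; the //app:app target name and file paths are hypothetical):

    # Build with instrumentation (needed at both compile and link time):
    bazel build //app:app \
        --copt=-fprofile-instr-generate --copt=-fcoverage-mapping \
        --linkopt=-fprofile-instr-generate

    # Run the program directly and interact with it; the raw profile is
    # written when the process exits.
    LLVM_PROFILE_FILE=scene1.profraw ./bazel-bin/app/app

    # Index the raw profile, then produce a coverage report.
    llvm-profdata merge -sparse scene1.profraw -o scene1.profdata
    llvm-cov report ./bazel-bin/app/app -instr-profile=scene1.profdata

Raw profiles from several scenario runs can be passed to the same llvm-profdata merge step to get combined coverage.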

Related

Unwanted skips in GoogleTest

I am using gtest in VS2019. I have one hundred tests in ten test suites. When I run all the tests and some of them fail, some test suites are skipped in their entirety.
I did not mark any test as skipped.
I suspect this happens when some of the failures are memory issues (invalid pointer, etc.). When I fix the errors and rerun, everything runs.
Why does this happen? How do I make sure every test runs when I hit "run all"?
You need to change the way you run the tests. What I suggest is to create a test project, add your files to it, and then build and run it; the tests will open in a terminal and all of them will run, just as gtest works on Linux: terminal-based.
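To illustrate the terminal-based approach, a short sketch, assuming the test binary is built as my_tests.exe (the name is hypothetical; the flags are standard gtest options):

    # Run the test binary directly from a terminal; nothing is silently
    # marked as skipped, and a crash shows up as the process aborting at
    # a specific test:
    my_tests.exe

    # Useful while hunting memory errors:
    my_tests.exe --gtest_filter=MySuite.*    # run a single suite
    my_tests.exe --gtest_break_on_failure    # break into the debugger on failure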

Visual Studio 2015 not running C++ unit tests

This is weird.
Firstly, loading the solution doesn't detect the two unit tests - I have to modify the unit test and do a rebuild for the tests to appear in test explorer.
Once I've done that, I can run a unit test ONCE. After that, I get:
Message: Failed to set up the execution context to run the test
How did it run the test the first time and not the subsequent times? Using depends.exe I can see there's one file missing: Microsoft.VisualStudio.TestTools.CppUnitTestFramework.x64.dll. I tried copying this file to the output directory but it made no difference.
The output directory contains all the files required by the main application to run, so all I've done is place the unit test DLL in that same directory. The test runs once then all subsequent runs die.
Found a solution: on the Test menu, under Test Settings, turn OFF Keep Test Execution Engine Running. Now I can run any test as many times as I need.
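If you prefer to bypass the persistent execution engine entirely, the tests can also be run from a developer command prompt; a sketch, assuming the test binary is MyTests.dll (the name is hypothetical):

    # /InIsolation runs the tests in a separate process rather than the
    # long-lived execution engine; /Platform should match your build.
    vstest.console.exe MyTests.dll /InIsolation /Platform:x64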

Possible to combine code coverage results (VC++)

I am using VC++ 2005 and 2008 on a project. Now I want to see if the unit test cases cover all the code, and I found a problem. We use Boost.Test for unit testing, and each file is designed to test a particular function or method. Each file is compiled into a separate executable.
I am able to view the results per executable in Visual Studio. What I am really interested in is to view the overall code coverage by all the tests combined. Is there a way to combine the code coverage results?
I don't know about Visual Studio's test coverage tools.
Our SD C++ Test Coverage Tool will combine test coverage vectors from a single instrumented set of source code, no matter how many times you compile/link it (as long as you don't change the source of the code being tested). This tool can be obtained for the Visual Studio dialect(s) of C++. SD's test coverage tools for other languages have this same property.
C++ Coverage Validator can combine results from different code coverage sessions. You can combine sessions interactively using the GUI or from the command line (so you can automate things).
Alternatively you could set up the automatic merging to a central session and get every code coverage session automatically merged into the central session.
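I don't know Coverage Validator's exact command-line syntax, so no example of that here. But as an illustration of the merge idea with a different toolchain (clang's source-based coverage rather than the VC++ tools), assuming two Boost.Test executables test_foo and test_bar built with -fprofile-instr-generate -fcoverage-mapping (the names are hypothetical):

    # Each test executable writes its own raw profile when it runs.
    LLVM_PROFILE_FILE=foo.profraw ./test_foo
    LLVM_PROFILE_FILE=bar.profraw ./test_bar

    # Merge the profiles, then report combined coverage across both binaries.
    llvm-profdata merge -sparse foo.profraw bar.profraw -o combined.profdata
    llvm-cov report ./test_foo -object ./test_bar -instr-profile=combined.profdata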

Prevent OCUnit tests from running when compilation fails

I'm using Xcode 3.2.2 and the built-in OCUnit test support. One problem I'm running into is that every time I do a build my unit tests are run, even if the build failed. Let's say I make a syntax error in one of my tests. The test fails to compile and the last successfully compiled version of the unit tests is run. The same thing happens if one of the dependent targets fails to build - the tests are still run. Which is obviously not what I want.
How can I prevent the tests from running if the build fails? If this is not possible then I'd rather have the tests never run automatically, is that possible? Sorry if this is obvious, I'm an Xcode noob. Should I be using a better unit testing framework?
The answer is to dump OCUnit and use GHUnit, which is about a million times better:
http://github.com/gabriel/gh-unit
All you need to do is make the script that runs the unit tests dependent on your test bundle having been built. To do this:
In your Targets group expand your unit test bundle and Get Info on the Run Script.
On the general tab click the + button for the Input Files and enter:
$(BUILT_PRODUCTS_DIR)/$(EXECUTABLE_PATH)

unit test build files

What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option, as they cost our customers thousands to distribute. Because of this we have very strict code quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple.
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc. (a short sketch follows at the end of this answer).
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
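To give a flavor of the shunit2 approach mentioned above, a minimal sketch; the script under test (nightly_build.sh) and the functions it defines are hypothetical, and the script is assumed to have no side effects when sourced:

    #!/bin/sh
    # test_nightly_build.sh -- unit tests for one of the nightly scripts.
    . ./nightly_build.sh

    test_make_build_dir_creates_directory() {
      dir="/tmp/test_build.$$"
      make_build_dir "$dir"
      assertTrue "build dir should exist" "[ -d $dir ]"
      rm -rf "$dir"
    }

    test_report_result_formats_line() {
      assertEquals "compile: FAILED" "$(report_result compile FAILED)"
    }

    # Load and run the framework (the path to shunit2 is installation-specific).
    . shunit2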
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
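A sketch of that idea, assuming GNU make and a small reference tree (all file names here are hypothetical):

    # Build the reference tree with the build files under test.
    cd reference-project
    make clean && make

    # Compare the produced artifacts against a manifest recorded from a
    # validated version of the build tools.
    find . -name '*.o' -o -name '*.a' | sort > actual_outputs.txt
    diff expected_outputs.txt actual_outputs.txt && echo "build files OK"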
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, only changing some variables (which I moved to an easy-to-recognize section). That's why, for me, unit-testing the build files is unnecessary. That may be different in other projects.