Coverage tests for Raku modules?

Apparently, there are no coverage test modules in the ecosystem, and the only reference to something similar is the coverage tooling in Comma IDE, which unfortunately is not present in the community (free) edition.
There seems to be some coverage support at a lower level, in MoarVM, but I don't see any way to use it easily for Raku modules. Is there maybe some simple language support for this?

The Comma IDE makes use of the MoarVM coverage output, which it parses, aggregates, and presents (using its model of the source code to figure out statement extents and which statements are coverable in order to generate the statistics).
The only other thing I'm aware of that currently exists to parse this output is this script. The MoarVM coverage support was originally developed in order to understand specification test coverage of the core built-ins, and the script makes a report of those. However, the mechanism that was put into MoarVM is actually more general, and so can be used to get the raw coverage data for any program. To my knowledge, however, the script I linked and Comma are the only tools built so far that analyze it.

Related

Is it necessary to unit test code generated by Apache Thrift?

We are introducing unit tests for a C++ project. Our goal is to cover up to 60% of the whole project. The project uses a lot of code generated by Apache Thrift for communication between client and server.
Should we write unit tests for the generated code? If we don't introduce unit tests for it, some coverage tools will complain that we are neglecting a big part of the project.
Does Apache Thrift provide those unit tests already?
There are a bunch of tests that run on the CI servers to ensure
the Thrift compiler can be built and works
the language bindings can be built and work
the language bindings operate as expected (including cross-lang tests)
In addition to the Apache Thrift Test Suite, each language binding may or may not define additional tests covering specific things as needed, e.g. as shown here.
Regarding "is it necessary": First, we all make errors (except Linus of course). I would probably not go as far and test the entire thing again, but I would test those pieces in my own code that implement a certain behaviour using that 3rd party library.
I don't know Apache Thrift specifically, but it's probably safe to assume that any "big name" library/framework will have their own tests to try to ensure that it isn't fundamentally broken. However:
Those tests can't test your use of the library/framework. From a 30-second search, Thrift uses an Interface Definition Language, so if there are "problems" in the IDL code you write, Thrift may generate incorrect code (it may complain, but think of the equivalent of undefined behaviour in C/C++).
Their tests won't test how your code uses the automatically generated code. You might be using it in ways it wasn't intended to be used, and so their "correct" code can still give the wrong results (for your application).
New releases of Thrift may behave differently than earlier versions, in ways that break your use of it. Assuming they're mentioned, scouring Release Notes for such changes is error-prone, so you may want some tests on features you rely on in the rest of your application. Especially important if you keep more-or-less automatically up-to-date with the latest version.
So, you probably don't need to unit-test it to the degree you would if you had written the code, but you will want tests near the generated code to detect problems with your use of it and/or changes in behaviour as new versions are released.
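As a minimal sketch of that last point (all class and function names below are hypothetical, not Thrift's actual generated API): rather than re-testing the generated code itself, test the thin layer in your own code that uses it, against a hand-written fake of the generated interface:

    #include <cassert>
    #include <string>

    // Stand-in for a generated service interface (hypothetical names).
    struct UserServiceIf {
        virtual ~UserServiceIf() = default;
        virtual std::string getUser(int id) = 0;
    };

    // Application code under test: our policy around the RPC call.
    std::string displayName(UserServiceIf& svc, int id) {
        std::string raw = svc.getUser(id);
        return raw.empty() ? "<unknown>" : raw;  // behaviour we rely on
    }

    // Hand-written fake so the test needs no running server.
    struct FakeUserService : UserServiceIf {
        std::string getUser(int id) override { return id == 1 ? "alice" : ""; }
    };

    int main() {
        FakeUserService fake;
        assert(displayName(fake, 1) == "alice");      // happy path
        assert(displayName(fake, 2) == "<unknown>");  // our fallback behaviour
        return 0;
    }

A test like this exercises your use of the generated code, and it will also break loudly if a new Thrift release changes behaviour you rely on.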

Dynamic dead code elimination tools for complex C++ projects

We have a project with a lot of code, part of it is legacy.
As part of the workflow, every once in a while all the functionality of the product is exercised.
I wonder if there is a way to use this fact to dynamically check which parts of the code were never used? (The difficult part is the C++ code; the .NET and Java parts are more under control and have less legacy.)
Also: are there dynamic dead-code elimination tools that can work with lots of code and complex projects (i.e. ~1M lines)?
All the similar questions I found talked about static analysis, which we already do.
Thank you!
You might want to look at the code coverage tools that are used in testing. The idea of these tools is that they instrument the code, and after running the set of tests you know which lines of code were executed at least once and which lines were never executed. After that you can improve the tests.
The same approach can be used to identify dead code, provided your execution environment is diverse enough.
I don't know what platform you are on, but we have used gcov with success if you're compiling with the GNU toolchain:
http://gcc.gnu.org/onlinedocs/gcc/Gcov.html
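As a concrete sketch of that workflow (file and function names invented for illustration):

    // demo.cpp -- compile with coverage instrumentation, run the product's
    // functional checks, then ask gcov which lines were never executed:
    //   g++ -fprofile-arcs -ftest-coverage demo.cpp -o demo
    //   ./demo
    //   gcov demo.cpp   # writes demo.cpp.gcov; unexecuted lines show '#####'
    #include <iostream>

    void used()   { std::cout << "exercised\n"; }
    void unused() { std::cout << "candidate dead code\n"; }  // never executed

    int main() {
        used();
        return 0;
    }

Functions whose lines never show a hit count across all your functional runs are the dead-code candidates.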

Adding source instrumentation code - is a source-to-source compiler the right approach? How do I build one?

I am working on a project where I need to track changes to a particular set of variables in any given application's code in order to model memory access patterns.
I can think of two main approaches; please give me your thoughts on them.
My initial thought is to do it the way many profilers such as gprof do: add instrumentation code to the target application code before compilation, and analyze the log generated by that instrumentation code to get the required information.
To accomplish this, I can only think of some sort of source-to-source compiler that parses the given code and injects instrumentation code into the application (a same-language source-to-source compiler), which I can later compile and run to get the required logs.
Does this seem right, or am I over-engineering? If it is right, are there tools that let me build a source-to-source compiler (relatively) easily?
Alternatively, I have read about GDB's support for Python, so I am wondering whether I can write a Python script that takes the set of variables from a config file, sets watchpoints, and logs every time there is a write to a watched variable. I tried to use this GDB feature, but on my Ubuntu machine it doesn't seem to be working at the moment.
http://sourceware.org/gdb/onlinedocs/gdb/Python.html#Python
Also, the applications will be written in nesC (I believe nesC is converted to C in the process of compilation), and they are going to run in TOSSIM as native apps on my computer.
See my paper on instrumenting code using a program transformation system (PTS); a PTS is a very general kind of "source-to-source compiler".
It shows how to install probes in code in a pretty straightforward way once you have a grammar for the language of interest. The underlying tool, DMS, makes it fairly easy to define the grammar, too.
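To give a feel for the result, here is a hand-written illustration (not actual DMS output) of the kind of probe such a transformation can inject; every write to a tracked variable is routed through a logging helper while the program's behaviour is preserved:

    #include <cstdio>

    // Hypothetical probe that the transformation inserts around writes.
    int& log_write(int& var, int value, const char* name,
                   const char* file, int line) {
        var = value;
        std::fprintf(stderr, "WRITE %s=%d at %s:%d\n", name, var, file, line);
        return var;
    }

    int main() {
        int counter = 0;
        // Original statement:    counter = counter + 1;
        // After instrumentation: the same assignment, wrapped in the probe.
        log_write(counter, counter + 1, "counter", __FILE__, __LINE__);
        return 0;
    }

The point of a PTS is that rewrites like this are expressed as patterns over the grammar, rather than written by hand for each assignment.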

Do you recommend Enabling Code Analysis for C/C++ on Build?

I'm using Visual Studio 2010, and in my C++/CLI project there are two Code Analysis settings:
Enable Code Analysis on Build
Enable Code Analysis for C/C++ on Build
My question is about the second setting.
I've enabled it; it takes a long time to run and doesn't find much.
Do you recommend enabling this feature? Why?
The two options you specify control the automatic execution of Code Analysis on managed and native C++ respectively.
Code Analysis of managed code is performed by the FxCop engine analyzing the generated IL.
Code Analysis of native code is performed during compilation by the PREfast engine analyzing the C++ source code.
I strongly encourage you to require your developers to have run CA on their code before checking it in. If you don't, you're:
Delaying the process of ensuring that your code has no known vulnerabilities and issues that could otherwise have been systematically removed from your product's source.
Denying your developers their right to improve their skills by learning incrementally what code they should not be writing and why.
Selling your customers short because they're the ones who will suffer from crashes and security issues when they're using your product.
Further, if you're writing native C++ and have not already planned to start adorning your code with SAL Annotations, then, frankly, someone at your place of work deserves to be dragged out into the street and humiliated! There's some great stuff coming down the pipe shortly in the next version of the SAL annotations - get on it now and be way ahead of the curve compared to your competitors! :)
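As a taste of what that looks like, here is a minimal sketch using the VS2010-era annotation style (the newer SAL 2.0 syntax spells these _Out_writes_(count), _In_z_, and so on):

    #include <sal.h>
    #include <string.h>

    // The annotation tells the native analyzer that `buffer` must point to
    // `count` writable chars, so an undersized buffer at a call site is
    // flagged at compile time instead of crashing at run time.
    void fill(__out_ecount(count) char* buffer, size_t count) {
        memset(buffer, 'x', count);
    }

    // `src` must be a valid NUL-terminated string.
    size_t name_length(__in_z const char* src) {
        return strlen(src);
    }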
Never did anything for me. In theory, it's supposed to help catch logical errors, but I've never found it to report anything.
We are using LINT to do a static code analysis for plain C++ applications (no .Net, no C++/CLI).
This is different from what you are using but probably the same principles can be applied.
We execute LINT like this:
During a build, only the changed sources (.cpp files) are run through LINT. Many more files may be recompiled (if a header file changed), but only the changed .cpp files are run through LINT.
Run the static code analysis on all files on a Continuous Integration server. If it finds something, have it mail the error to the developers who most recently committed changes to the version control system, or to the main developer.
Additionally, you could perform a static code analysis on all files that are committed to your version control system; e.g. in Subversion you could do this in a commit hook.

Automated Dead code detection in native C++ application on Windows?

Background
I have an application, written in native C++ over the course of several years, that is around 60 KLOC. There are many, many functions and classes that are dead (probably 10-15%, as the asker of the similar Unix-based question linked below estimated). We recently began doing unit testing on all new code, and applying it to modified code whenever possible. However, I would make a SWAG that we have less than 5% test coverage at the present moment.
Assumptions/Constraints
The method and/or tools must support:
Native (i.e. unmanaged) C++
Windows XP
Visual Studio 2005
Must not require user-supplied test cases for coverage (e.g. it can't depend on unit tests to generate code coverage)
If the methods support more than these requirements, then great.
NOTE: We currently use the Professional edition of Visual Studio 2005, not Team System. Therefore, a suggestion that uses Team System might be valid (I don't know; I've never used it); however, I'm hoping it is not the only solution.
Why using unit tests for code coverage is problematic
I believe it is impossible for a generic tool to find all the dead (i.e. unreachable) code in an arbitrary application with zero false positives (I think this would be equivalent to the halting problem). However, I also believe it is possible for a generic tool to find many kinds of dead code that are highly likely to be dead in fact, such as classes or functions that are never referenced in the code by anything else.
By using unit tests to provide this coverage, you are no longer using a generic algorithm, and are thus increasing both the percentage of dead code you can detect and the probability that any hits are not false positives. Conversely, using unit tests could result in false negatives, since the unit tests themselves might be the only thing exercising a given piece of code. Ideally, I would have regression testing that exercises all externally available methods, APIs, user controls, etc., which would serve as a baseline measurement for the code coverage analysis and rule certain methods out as false positives. Sadly, however, I do not have such automated testing at the present time.
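A contrived sketch of that false-negative case (names invented for illustration): legacy() is exercised only by its own unit test, so coverage-based detection reports it as live even though no production code path ever reaches it:

    #include <cassert>

    int legacy(int x) { return x * 2; }  // no production caller remains

    void test_legacy() { assert(legacy(2) == 4); }  // the only caller is the test

    int main() {
        test_legacy();  // running the suite marks legacy() as covered
        return 0;
    }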
Since I have such a large code base with such a low test coverage percentage, however, I'm looking for something that could help without requiring a huge amount of time invested in writing test cases.
Question
How do you go about detecting dead code in an automated or semi-automated fashion in a native C++ application on the Windows platform with the Visual Studio 2005 development environment?
See Also
Dead code detection in legacy C/C++ project
I want to tell the VC++ Compiler to compile all code. Can it be done?
Ask the linker to remove unreferenced objects (/OPT:REF). If you use function-level linking and verbose linker output, the linker will list every function it can prove is unused. This list may be far from complete, but you already have the tools needed.
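A sketch of that setup, assuming the MSVC command-line tools (the exact wording of the report varies by linker version):

    // dead.cpp
    //   cl /Gy /c dead.cpp                    (function-level linking)
    //   link /OPT:REF /VERBOSE:REF dead.obj   (report discarded functions)
    // The linker prints a "Discarded ..." line for each unreferenced
    // function it removes, e.g. never_called below.
    #include <iostream>

    void never_called() { std::cout << "dead\n"; }

    int main() {
        std::cout << "alive\n";
        return 0;
    }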
We use Bullseye, and I can recommend it. It doesn't need to be run from a unit test environment, although that's what we do.
Use a code coverage tool against your unit test suite.