Documenting unit tests in-line with code

I'm trying to use doxygen to document my unit tests, but I'd like to document them in-line with the code instead of in the test header to reduce copy/paste errors when making similar tests. Of note, I'm using the RTF output format.
/** @brief A method for testing doxygen method documentation
 * @test
 * -# Step 1
 * -# Step 2
 * -# Step 3
 */
[TestMethod()]
public void DoxygenScratchPadInHeader()
{
    // code that may or may not be in sync with the header
}
/** @brief A method for testing doxygen method documentation
 * @test
 */
[TestMethod()]
public void DoxygenScratchPadInLine()
{
    /// @par
    /// -# Initialize the value to 0
    int i = 0;

    /// @par
    /// -# Add a number
    i += 3;

    /// @par
    /// -# Assert that the number is three
    Assert.AreEqual(3, i);
}
Test List Output:
Member UpdateProtocolQATests.CUpdateProtocolTest.DoxygenScratchPadInHeader ()
Step 1
Step 2
Step 3
Member UpdateProtocolQATests.CUpdateProtocolTest.DoxygenScratchPadInLine ()
(note: no steps appear here)
Function description output:
void UpdateProtocolQATests.CUpdateProtocolTest.DoxygenScratchPadInHeader ()
A method for testing doxygen method documentation.
Test:
Step 1
Step 2
Step 3
void UpdateProtocolQATests.CUpdateProtocolTest.DoxygenScratchPadInLine ()
A method for testing doxygen method documentation.
Test:
1. Initialize the value to 0
1. Add a number
1. Assert that the number is three
(Displaying the last bit as code because Stack Overflow keeps auto-correcting the repeated "1." to "1. 2. 3.", which is what I actually want in the end.)
Any better ideas for implementing in-line test step documentation? I don't care so much about the steps not appearing in the test list; we can live with just the references to the functions.

I'm quite sympathetic to your plight, but as far as I can tell, Doxygen really is only designed to document specific code objects (files, classes, namespaces, variables, etc.) rather than arbitrary lines of code.
At the moment, the only possibility I can think of for circumventing this shortcoming is to generate comments that include, via the \code command, the actual code you want to document.
There are two ways I can think of for accomplishing this:
Put some sort of special string (say, for instance, DOXY_INLINE_CODE) in the Doxygen comments that should be associated with a single line of code. Then write a filter (see FILTER_PATTERNS) to replace this string with \code <nextline> \endcode, where <nextline> is the next line of code that the filter sees; a rough sketch follows after these two options. I'm not sure where Doxygen would put these comments or how they'd look; they might be pretty ugly, unfortunately. (One odd behavior I dislike is that the \code command seems to strip leading spaces, so indentation wouldn't come out correctly.)
Write a "Doxygen runner" that would automatically generate .dox files from your code before calling doxygen. These auto-generated .dox files would contain documentation generated from corresponding .cpp or other source files. You can use various Doxygen commands to link back to the documentation of the actual source code, and you can also insert a copy of the .dox documentation in the source code documentation (or vice-versa).
These are hacks, and you'd have to fiddle around with Doxygen quite a bit to get it to handle this case nicely, but I hope these suggestions help somewhat. Best of luck. (I'm currently working on doing something similar to get Doxygen to nicely document Google Tests, also in the context of a project for a highly-regulated industry.)

I remembered coming across this question when I was looking for a similar solution. I wanted to document user testing procedures as close as possible to their corresponding unit tests or groups of unit tests. The following is a subset of the solution we implemented with Doxygen groups/sub-groups.
A separate manual-test.dox file is defined to create a top-level group and several sub-groups under which specific manual tests are collected.
/**
 * @defgroup manualtest Manual Testing Instructions
 * @{
 * This section contains a collection of manual testing...
 *
 * @defgroup menutest Menu Tests
 * @defgroup exporttest Import/Export Tests
 * @}
 */
The following shows a sample of a Java unit test class with unit test documentation and manual testing instructions.
public class MenuTests {
    ...

    /**
     * @addtogroup menutest
     * **Open File Test**
     *
     * The purpose of this test is to...
     *
     * -# Do something
     * -# Verify something
     */

    /**
     * This unit test verifies that the given file can be created via
     * the File->Open... menu handler. It...
     */
    @Test
    public void open_file_test() {
        ...
    }
}
The resulting HTML documentation will include a Manual Testing Instruction page
under the Modules section. Said page will contain markup details as given in manual-test.dox and links to module pages for each of the defined sub-groups, such as Menu Tests.
The Menu Tests page will show all manual unit testing steps added to this
sub-group, thereby providing a separate document that can be included
by reference as part of a Software Test Plan or User Test Plan.
The only caveat is that there is no way to explicitly define the order in which test instructions are added to groups. When defined in a single class, they are added in the order they are defined, and multiple classes are parsed in alphabetical order.
For projects that required more control over how tests are collected, Doxygen is used to create XML output. Test cases are extracted using an XSLT template and ordered as needed, but that is another question of its own.

Use a tool which supports document generation from inline comments:
NaturalDocs
AsciidoctorJ
Doxygen has helpers for Perl, which is the language NaturalDocs is written in.

Related

C++ Google test - exporting additional information like 'author' and 'project' to XML report

I am using the output parameter of google test like
--gtest_output=xml:Reports\fooReport.xml
However, some XML attributes that the tool gtest2html might expect, like author and project, won't be exported.
Is there a property like 'author' available in gtest at all? If so, where would I assign it?
Explanation
In GoogleTest, the fields that you see in gtest2html, like 'author', 'project', etc., are not available in the XML output by default. They are custom fields that gtest2html expects to find.
However, you can add them to the appropriate XML element using the RecordProperty function provided by GoogleTest.
The final section of the documentation explains:
Calling RecordProperty() outside of the lifespan of a test is allowed. If it's called outside of a test but between a test suite's SetUpTestSuite() and TearDownTestSuite() methods, it will be attributed to the XML element for the test suite. If it's called outside of all test suites (e.g. in a test environment), it will be attributed to the top-level XML element.
So, in order to make your xml output compatible with gtest2html, you need those fields in the outermost xml element (which has the tag "testsuites").
Solution
So, in main or in a test environment (you don't have to use one, but the documentation strongly recommends it), before any test is started, you need to add the call below:
::testing::Test::RecordProperty ("project", "MyProject");
This will then add this to the xml output:
<testsuites ... project="MyProject" name="AllTests">
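For completeness, a minimal main() that records these properties before any test runs might look like this (a sketch; the property values are placeholders):

#include <gtest/gtest.h>

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    // Called outside of all test suites, so the properties are attributed
    // to the top-level <testsuites> element of the XML report.
    ::testing::Test::RecordProperty("project", "MyProject");
    ::testing::Test::RecordProperty("author", "J. Doe");
    return RUN_ALL_TESTS();
}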
(I know it is quite late as an answer, but hopefully this will help someone)

In WebStorm, is there a way to automatically inject a language into an argument of a particular function?

Here's what my code looks like.
/**
 * @param {string} css
 * @param {string} [otherParam]
 */
const addStyle = (css, otherParam) => {
    // Do stuff...
};

addStyle('#element{font-weight:bold;}', 'Some string of text');
Here are the current manual and theoretical ways I know to accomplish this.
1. Manual Injection
This works great, but I'd like it to be automatic.
(Step-by-step screenshots of the manual injection and its result are omitted here.)
2. Injection Comment
This won't work because it injects CSS into all arguments of the function.
The only way to get it to work is by putting each argument on a separate line (screenshot omitted), but again, I'd like it to be automatic without the need to insert the comment line each time. This is the method I currently use.
3. Language Injection Settings
I tried messing around with this for a while, but I can't seem to figure it out.
+ jsLiteralExpression().withSuperParent(2, jsLiteralExpression().withText(string().startsWith("addStyle(")))
A custom Language Injection rule seems to work just fine here for your test code. Tested in a plain JavaScript file using PhpStorm 2020.1.2 on Windows 10 (screenshot omitted).
The actual rule:
+ jsArgument(jsReferenceExpression().withQualifiedName("addStyle"), 0)

How to add documentation to the class description from a comment within a function in doxygen?

When using C++ with doxygen I would like to add to the class description from within a function. I basically want to add information about the function calls I am making.
class_name.h:
/**
 * This is the overall description of the class
 */
class ClassName
{
    ...
};
class_name.cpp:
void ClassName::randomFunction()
{
    /**
     * @class ClassName
     *
     * calls testData on stuff (this should be appended to the class description)
     */
    testData(stuff);
}
Doxygen output:
Detailed Description

This is the overall description of the class

calls testData on stuff
This method works when I put the comment outside of a function, but does not show up anywhere if I put it within randomFunction as the example shows. In the end, I would like the reader of the documentation to see a description of the class followed by the snippet that I have in the example. This makes it easier to keep my documentation in sync with the code and immediately tells the user about important functions that I am calling.
The reason I want to do this is to document the network messages that the class makes in one place instead of having the user search through documentation on multiple member functions.
EDIT:
doxygen version is 1.8.5
added clarification
The version of doxygen used (1.8.5, from August 23, 2013) is a bit old; it is advised to update to the current version (1.8.17).
To have code snippets or documentation snippets in another place as well, doxygen has the \snippet command (see http://doxygen.nl/manual/commands.html#cmdsnippet).
To group information from different places, doxygen has grouping commands like \defgroup (http://doxygen.nl/manual/commands.html#cmddefgroup), \ingroup (http://doxygen.nl/manual/commands.html#cmdingroup), and \addtogroup (http://doxygen.nl/manual/commands.html#cmdaddtogroup).
See also the grouping chapter in the doxygen documentation (http://doxygen.nl/manual/grouping.html).
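For example, a sketch of the \snippet approach applied to the code above (the block name network-calls is arbitrary). In class_name.cpp, mark the region to quote:

void ClassName::randomFunction()
{
    //! [network-calls]
    testData(stuff);
    //! [network-calls]
}

Then, in class_name.h, pull the marked region into the class description:

/**
 * This is the overall description of the class
 *
 * \snippet class_name.cpp network-calls
 */
class ClassName
{
    ...
};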

Generate doc only for some member functions

I have a project which contains an RPC-invokable API. The project also contains some other code. I'd like to generate a clean documentation just for these API functions (for users working with the API). Yet it should still be possible to generate a documentation of the whole program, intended for the developers.
The API is split into several classes ("command modules"). First thing I did is tell Doxygen to only look at those files, of course. But these classes also have some code which is not part of the API I'd like to generate the documentation for (helper functions).
These invokable functions ("commands") have a special return type: CommandResult.
As an example, here is such an API class:
class CommandModuleFoo {
    int privateHelperFunction();
    int privateMember;

public:
    int publicHelperFunction();
    int publicMember;

public slots:
    /** Some documentation. */
    CommandResult myFunction1(int someArg);

    /** Some documentation. */
    CommandResult myFunction2();
};
Now the documentation should basically contain the following:
class CommandModuleFoo
    Public members:
        CommandResult myFunction1(int someArg)
            Some documentation.
        CommandResult myFunction2()
            Some documentation.
Question:
I know that I can select only a subset of the project's files by simply just naming them in the INPUT variable of my Doxyfile. But can I also select only a set of functions using a pattern?
Since I guess this is not possible, can I tell Doxygen to only generate documentation for one section? Doxygen has section markers: \if...\endif can be used to exclude some part of the document but include them with the configuration variable ENABLED_SECTIONS. But I need the opposite. Is there something like ONLY_SECTION?
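For reference, the section markers look like this (a small sketch; INTERNAL_ONLY is a made-up section label):

/**
 * Public description, always visible.
 *
 * \if INTERNAL_ONLY
 * This paragraph only appears when ENABLED_SECTIONS contains INTERNAL_ONLY.
 * \endif
 */
void someFunction();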
Possible workarounds:
I could use the above-mentioned section conditions around all code except the commands I want to document. But that sounds very ugly.
I could set HIDE_UNDOC_MEMBERS to YES in order to generate documentation only for documented members, but that would make it impossible to also generate a full documentation of the program if one wants to (i.e. it prevents documenting non-API functions). Also, detecting undocumented API functions is then more difficult.
I currently use the second workaround.
You could do the following:
Use \internal in the documentation of internal functions (the non-commands in this case):
/**
 * \internal
 * This helper function does foo.
 */
void myHelperFunction();
Your commands use normal doxygen comments:
/**
 * Some documentation.
 */
CommandResult myFunction();
Then, in your Doxyfile use INTERNAL_DOCS = YES if you compile the documentation of the whole program and INTERNAL_DOCS = NO (default) to compile your API documentation.
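For example, you could keep two Doxyfiles that differ only in this flag (the file names are made up):

# Doxyfile.api: clean API documentation for the users of the API
INTERNAL_DOCS = NO

# Doxyfile.dev: full documentation for developers
INTERNAL_DOCS = YES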
Basically, treat your program like it was a library. Library developers often encounter this problem: they obviously want to have a clean documentation for the users of the library which only contains exported stuff, while they (probably) also want to have a more verbose documentation for developers of the library.
PS. You can also use \internal to selectively include paragraphs of the documentation of a function, i.e. your documentation for the developers could include some more detail which is not important for the users of your API. For details consult the documentation of \internal.
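For example, a small sketch (reusing the CommandResult type from the question):

/**
 * Sends the command and returns the result.
 *
 * \internal
 * Developer-only detail: retries up to three times before failing.
 */
CommandResult sendCommand();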

How can I best write unit test cases for a Parser?

I am writing a parser which generates the 32 bit opcode for each command. For example, for the following statement:
set lcl_var = 2
my parser generates the following opcodes:
// load immdshort 2 (loads the value 2)
0x10000010
// strlocal lclvar (lcl_var is converted to an index to identify the var)
0x01000002
Please note that lcl_var can be anything, i.e. any variable name can be given. How can I write the unit test cases for this? Can we avoid hard-coding the values? Is there a way to make it generic?
It depends on how you structured your parser. A unit test tests a single UNIT.
So, if you want to test your entire parser as a single unit, you can give it a list of commands and verify that it produces the correct opcodes (which you checked manually when you wrote the test). You can write tests for each command, covering normal usage, edge-case usage, and just-beyond-edge-case usage. For example, test that:
set lcl_var = 2
results in:
0x10000010
0x01000002
And the same for 0, -1, MAX_INT-1, MAX_INT+1, ...
You know the correct result for these values. Same goes for different variables.
If your question is "How do I run the same test with different inputs and expected values without writing one xUnit test per input-output combination?", then the answer would be to use something like the RowTest NUnit extension. I wrote a quick write-up about it on my blog recently.
An example of this would be
[TestFixture]
public class TestExpression
{
    [RowTest]
    [Row(" 2 + 3 ", "2 3 +")]
    [Row(" 2 + (30 + 50 ) ", "2 30 50 + +")]
    [Row(" ( (10+20) + 30 ) * 20-8/4 ", "10 20 + 30 + 20 * 8 4 / -")]
    [Row("0-12000-(16*4)-20", "0 12000 - 16 4 * - 20 -")]
    public void TestConvertInfixToPostfix(string sInfixExpr, string sExpectedPostfixExpr)
    {
        Expression converter = new Expression();
        List<object> postfixExpr = converter.ConvertInfixToPostfix(sInfixExpr);

        StringBuilder sb = new StringBuilder();
        foreach (object term in postfixExpr)
        {
            sb.AppendFormat("{0} ", term.ToString());
        }

        Assert.AreEqual(sExpectedPostfixExpr, sb.ToString().Trim());
    }
}
Applied to your parser, a single input-output check could look like:

int[] opcodes = Parser.GetOpcodes("set lcl_var = 2");
Assert.AreEqual(2, opcodes.Length);
Assert.AreEqual(0x10000010, opcodes[0]);
Assert.AreEqual(0x01000002, opcodes[1]);
You don't specify what language you're writing the parser in, so I'm going to assume for the sake of argument that you're using an object-oriented language.
If this is the case, then dependency injection could help you out here. If the destination of the emitted opcodes is an instance of a class (like File, for instance), try giving your emitter class a constructor that takes an object of that type to use as the destination for emitted code. Then, from a unit test, you can pass in a mock object that's an instance of a subclass of your destination class, capture the emitted opcodes for specific statements, and assert that they are correct.
If your destination class isn't easily extensible, you may want to create an interface based on it that both the destination class and your mock class can implement.
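A minimal sketch of this idea in C++ (all names here, such as OpcodeSink, CapturingSink, and Emitter, are hypothetical):

#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical destination interface for emitted opcodes; the production
// destination (e.g. a file-backed sink) would implement it too.
struct OpcodeSink {
    virtual ~OpcodeSink() = default;
    virtual void emit(std::uint32_t opcode) = 0;
};

// Test double: captures emitted opcodes for later assertions.
struct CapturingSink : OpcodeSink {
    std::vector<std::uint32_t> opcodes;
    void emit(std::uint32_t opcode) override { opcodes.push_back(opcode); }
};

int main() {
    CapturingSink sink;
    // A real parser would take the sink via its constructor and call
    // emit() for each generated opcode; here we simulate those calls.
    sink.emit(0x10000010);  // load immdshort 2
    sink.emit(0x01000002);  // strlocal lcl_var
    assert((sink.opcodes == std::vector<std::uint32_t>{0x10000010, 0x01000002}));
    return 0;
}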
As I understand it, you would first write a test for your specific example, i.e. where the input to your parser is:
set lcl_var = 2
and the output is:
0x10000010 // load immdshort 2
0x01000002 // strlocal lclvar
When you have implemented the production code to pass that test, and refactored it, then, if you are not satisfied that it could handle any local variable, write another test with a different local variable and see whether it passes, e.g. a new test with input:
set lcl_var2 = 2
And write your new test to expect the different output that you want. Keep doing this until you are satisfied that your production code is robust enough.
It's not clear if you are looking for a methodology or a specific technology to use for your testing.
As far as methodology goes, maybe you don't want to do extensive unit testing. Perhaps a better approach would be to write some programs in your domain-specific language and then execute the opcodes to produce a result. The test programs would then check this result. This way you can exercise a bunch of code but check only one result at the end, instead of checking the generated opcodes each time. Start with simple programs to flush out obvious bugs and then move to harder ones.
Another approach to take is to automatically generate programs in your domain specific language along with the expected opcodes. This can be very simple like writing a perl script that produces a set of programs like:
set lcl_var = 2
set lcl_var = 3
Once you have a suite of test programs in your language that have correct output, you can go backwards and generate unit tests that check each opcode. Since you already have the opcodes, it becomes a matter of inspecting the output of the parser for correctness and reviewing its code.
While I've not used cppunit, I've used an in-house tool that was very much like it, and it was easy to implement unit tests that way.
What do you want to test? Do you want to know whether the correct "store" instruction is created? Whether the right variable is picked up? Make up your mind what you want to know and the test will be obvious. As long as you don't know what you want to achieve, you will not know how to test the unknown.
In the meantime, just write a simple test. Tomorrow, or some later day, you will come to this place again because something broke. At that time, you will know more about what you want to do, and it might be simpler to design a test.
Today, don't try to be the person you will be tomorrow.