I am using the output parameter of google test like
--gtest_output=xml:Reports\fooReport.xml
However, some XML attributes that the tool gtest2html expects, such as author and project, are not exported.
Is there a property like 'author' available in gtest at all? If so, where would I assign it?
Explanation
In GoogleTest, the fields that you see in gtest2html, like 'author', 'project', etc., are not part of the XML output by default. They are custom fields that gtest2html expects to find.
However, you can add them to the appropriate XML element using the RecordProperty function provided by GoogleTest.
The final section of the documentation explains:
Calling RecordProperty() outside of the lifespan of a test is allowed. If it's called outside of a test but between a test suite's SetUpTestSuite() and TearDownTestSuite() methods, it will be attributed to the XML element for the test suite. If it's called outside of all test suites (e.g. in a test environment), it will be attributed to the top-level XML element.
So, in order to make your XML output compatible with gtest2html, you need those fields on the outermost XML element (the one with the tag "testsuites").
Solution
So, in main or in a test environment (you don't have to use one, but the documentation strongly recommends it), before any test is started, you need to add the following call:
::testing::Test::RecordProperty("project", "MyProject");
This will then add the attribute to the XML output:
<testsuites ... project="MyProject" name="AllTests">
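For instance, a minimal main() could record the properties before the tests run (the property names and values here are just placeholders for whatever gtest2html expects):

```cpp
#include "gtest/gtest.h"

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    // Called outside of any test suite, so the properties are attached
    // to the top-level <testsuites> element of the XML report.
    ::testing::Test::RecordProperty("project", "MyProject");
    ::testing::Test::RecordProperty("author", "SomeAuthor");
    return RUN_ALL_TESTS();
}
```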
(I know it is quite late as an answer, but hopefully this will help someone)
Here's what my code looks like.
/**
 * @param {string} css
 * @param {string} [otherParam]
 */
const addStyle = (css, otherParam) => {
    // Do stuff...
};
addStyle('#element{font-weight:bold;}', 'Some string of text');
Here are the current manual and theoretical ways I know to accomplish this.
1. Manual Injection
This works great, but I'd like it to be automatic.
Step 1:
Step 2:
Result:
2. Injection Comment
This won't work because it injects CSS into all arguments of the function.
The only way to get it to work is to put each argument on a separate line, like this; but again, I'd like it to be automatic, without needing to insert the comment line each time. This is the method I currently use.
3. Language Injection Settings
I tried messing around with this for a while, but I can't seem to figure it out.
+ jsLiteralExpression().withSuperParent(2, jsLiteralExpression().withText(string().startsWith("addStyle(")))
A custom Language Injection rule seems to work just fine here for your test code. Tested in a plain JavaScript file using PhpStorm 2020.1.2 on Windows 10:
The actual rule:
+ jsArgument(jsReferenceExpression().withQualifiedName("addStyle"), 0)
When using C++ with doxygen, I would like to add to the class description from within a function. I basically want to add information about the function calls I am making.
class_name.h:
/**
* This is the overall description of the class
*/
class ClassName
{
    ...
};
class_name.cpp:
void ClassName::randomFunction()
{
    /**
     * @class ClassName
     *
     * calls testData on stuff (this should be appended to the class description)
     */
    testData(stuff);
}
Doxygen output:
<b>Detailed Description</b>
<br>
This is the overall description of the class
<br>
calls testData on stuff
This method works when I put the comment outside of a function, but does not show up anywhere if I put it within randomFunction as the example shows. In the end, I would like the reader of the documentation to see a description of the class followed by the snippet that I have in the example. This makes it easier to keep my documentation in sync with the code and immediately tells the user about important functions that I am calling.
The reason I want to do this is to document the network messages that the class makes in one place instead of having the user search through documentation on multiple member functions.
EDIT:
doxygen version is 1.8.5
added clarification
The version of doxygen used (1.8.5, August 23, 2013) is rather old; it is advisable to update to the current version (1.8.17).
To reuse code or documentation snippets in another place, doxygen has the command \snippet (see http://doxygen.nl/manual/commands.html#cmdsnippet).
To group information from different places, doxygen has grouping commands such as \defgroup (http://doxygen.nl/manual/commands.html#cmddefgroup), \ingroup (http://doxygen.nl/manual/commands.html#cmdingroup), and \addtogroup (http://doxygen.nl/manual/commands.html#cmdaddtogroup).
See also the grouping chapter in the doxygen documentation (http://doxygen.nl/manual/grouping.html).
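As a rough sketch of the grouping approach (the group name and wording are hypothetical), the network-message documentation could be collected under one group:

```cpp
/** \defgroup NetworkMessages Network messages sent by ClassName
 *  Central place documenting all network traffic this class produces.
 */

/** \ingroup NetworkMessages
 *  Calls testData on stuff, which sends a network message.
 */
void ClassName::randomFunction();
```

Doxygen then renders a "Network messages sent by ClassName" module page that lists every member added to the group, so readers see all messages in one place.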
I have a project which contains an RPC-invokable API. The project also contains some other code. I'd like to generate a clean documentation just for these API functions (for users working with the API). Yet it should still be possible to generate a documentation of the whole program, intended for the developers.
The API is split into several classes ("command modules"). First thing I did is tell Doxygen to only look at those files, of course. But these classes also have some code which is not part of the API I'd like to generate the documentation for (helper functions).
These invokable functions ("commands") have a special return type: CommandResult.
As an example, here is such an API class:
class CommandModuleFoo {
    int privateHelperFunction();
    int privateMember;

public:
    int publicHelperFunction();
    int publicMember;

public slots:
    /** Some documentation. */
    CommandResult myFunction1(int someArg);
    /** Some documentation. */
    CommandResult myFunction2();
};
Now the documentation should basically contain the following:
class CommandModuleFoo
Public members:
CommandResult myFunction1(int someArg)
Some documentation.
CommandResult myFunction2()
Some documentation.
Question:
I know that I can select only a subset of the project's files by simply just naming them in the INPUT variable of my Doxyfile. But can I also select only a set of functions using a pattern?
Since I guess this is not possible, can I tell Doxygen to only generate documentation for one section? Doxygen has section markers: \if...\endif can be used to exclude some part of the document but include them with the configuration variable ENABLED_SECTIONS. But I need the opposite. Is there something like ONLY_SECTION?
Possible workarounds:
I could use above-mentioned section conditions around every code except the commands I want to document. But that sounds very ugly.
I could set HIDE_UNDOC_MEMBERS to YES in order to only generate documentation for documented members, but that would make it impossible to also generate a full documentation of the program if one wants to (i.e., it forbids documenting non-API functions). Also, detecting undocumented API functions then becomes more difficult.
I currently use the second workaround.
You could do the following:
Use \internal in the documentation of internal functions (the non-commands in this case):
/**
 * \internal
 * This helper function does foo.
 */
void myHelperFunction();
Your commands use normal doxygen comments:
/**
 * Some documentation.
 */
CommandResult myFunction();
Then, in your Doxyfile use INTERNAL_DOCS = YES if you compile the documentation of the whole program and INTERNAL_DOCS = NO (default) to compile your API documentation.
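A sketch of the relevant Doxyfile settings for the two builds (the file names are hypothetical; you could also keep one Doxyfile and override the setting on the command line):

```
# Doxyfile.api - clean API documentation for users
INTERNAL_DOCS   = NO

# Doxyfile.dev - full documentation for developers
INTERNAL_DOCS   = YES
```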
Basically, treat your program as if it were a library. Library developers often encounter this problem: they obviously want a clean documentation for the users of the library which only contains exported stuff, while they (probably) also want a more verbose documentation for the developers of the library.
PS. You can also use \internal to selectively include paragraphs of the documentation of a function, i.e. your documentation for the developers could include some more detail which is not important for the users of your API. For details consult the documentation of \internal.
I am writing a parser which generates the 32 bit opcode for each command. For example, for the following statement:
set lcl_var = 2
my parser generates the following opcodes:
// load immdshort 2 (loads the value 2)
0x10000010
// strlocal lclvar (lcl_var is converted to an index to identify the var)
0x01000002
Please note that lcl_var can be anything, i.e., any variable can be given. How can I write the unit test cases for this? Can we avoid hard-coding the values? Is there a way to make it generic?
It depends on how you structured your parser. A unit test tests a single unit.
So, if you want to test your entire parser as a single unit, you can give it a list of commands and verify that it produces the correct opcodes (which you checked manually when you wrote the test). You can write tests for each command, covering normal usage, edge-case usage, and just-beyond-edge-case usage. For example, test that:
set lcl_var = 2
results in:
0x10000010
0x01000002
And the same for 0, -1, MAX_INT-1, MAX_INT+1, ...
You know the correct result for these values. Same goes for different variables.
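One way to keep those input/expected pairs out of individual test bodies is a small table. In this sketch, getOpcodes stands in for the real parser entry point (its name and signature are assumptions), and only the known example from the question is stubbed in:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for the real parser entry point: returns the opcodes for one
// statement. Only the documented "set lcl_var = 2" case is stubbed here
// so the sketch stays self-contained.
std::vector<uint32_t> getOpcodes(const std::string& statement) {
    if (statement == "set lcl_var = 2")
        return {0x10000010, 0x01000002}; // load immdshort 2, strlocal lcl_var
    return {};
}

struct Case {
    std::string input;
    std::vector<uint32_t> expected;
};

// Run every table entry through the parser; true if all opcodes matched.
bool runCases(const std::vector<Case>& cases) {
    for (const auto& c : cases)
        if (getOpcodes(c.input) != c.expected)
            return false;
    return true;
}
```

Adding a new command or edge case then means adding one row to the table rather than one more test function.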
If your question is "How do I run the same test with different inputs and expected values without writing one xUnit test per input-output combination?", then the answer is to use something like the RowTest NUnit extension. I wrote a short introductory post on my blog recently.
An example of this would be
[TestFixture]
public class TestExpression
{
    [RowTest]
    [Row(" 2 + 3 ", "2 3 +")]
    [Row(" 2 + (30 + 50 ) ", "2 30 50 + +")]
    [Row(" ( (10+20) + 30 ) * 20-8/4 ", "10 20 + 30 + 20 * 8 4 / -")]
    [Row("0-12000-(16*4)-20", "0 12000 - 16 4 * - 20 -")]
    public void TestConvertInfixToPostfix(string sInfixExpr, string sExpectedPostfixExpr)
    {
        Expression converter = new Expression();
        List<object> postfixExpr = converter.ConvertInfixToPostfix(sInfixExpr);
        StringBuilder sb = new StringBuilder();
        foreach (object term in postfixExpr)
        {
            sb.AppendFormat("{0} ", term.ToString());
        }
        Assert.AreEqual(sExpectedPostfixExpr, sb.ToString().Trim());
    }
}
Applied to your parser, a single row's check would look something like:
int[] opcodes = Parser.GetOpcodes("set lcl_var = 2");
Assert.AreEqual(2, opcodes.Length);
Assert.AreEqual(0x10000010, opcodes[0]);
Assert.AreEqual(0x01000002, opcodes[1]);
You don't specify what language you're writing the parser in, so I'm going to assume for the sake of argument that you're using an object-oriented language.
If this is the case, then dependency injection could help you out here. If the destination of the emitted opcodes is an instance of a class (like File, for instance), try giving your emitter class a constructor that takes an object of that type to use as the destination for emitted code. Then, from a unit test, you can pass in a mock object that's an instance of a subclass of your destination class, capture the emitted opcodes for specific statements, and assert that they are correct.
If your destination class isn't easily extensible, you may want to create an interface based on it that both the destination class and your mock class can implement.
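A minimal C++ sketch of that idea (all names here, OpcodeSink, RecordingSink, compileSetStatement, are hypothetical; a real code generator would derive the opcodes from the parsed statement instead of hard-coding them):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical destination interface; in production this could wrap a file.
class OpcodeSink {
public:
    virtual ~OpcodeSink() = default;
    virtual void emit(uint32_t opcode) = 0;
};

// Test double: records emitted opcodes in memory instead of writing them out.
class RecordingSink : public OpcodeSink {
public:
    void emit(uint32_t opcode) override { opcodes.push_back(opcode); }
    std::vector<uint32_t> opcodes;
};

// Stand-in for the code generator; the sink is injected via the parameter,
// so tests can substitute RecordingSink for the real file-backed sink.
void compileSetStatement(OpcodeSink& sink) {
    // A real parser would produce these from "set lcl_var = 2".
    sink.emit(0x10000010); // load immdshort 2
    sink.emit(0x01000002); // strlocal lcl_var
}
```

The test then constructs a RecordingSink, runs the generator, and asserts on the captured opcodes, with no file I/O involved.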
As I understand it, you would first write a test for your specific example, i.e. where the input to your parser is:
set lcl_var = 2
and the output is:
0x10000010 // load immdshort 2
0x01000002 // strlocal lclvar
When you have implemented the production code to pass that test, and refactored it, then if you are not satisfied that it could handle any local variable, write another test with a different local variable and see whether it passes, e.g. a new test with input:
set lcl_var2 = 2
And write your new test to expect the different output that you want. Keep doing this until you are satisfied that your production code is robust enough.
It's not clear if you are looking for a methodology or a specific technology to use for your testing.
As far as methodology goes, maybe you don't want to do extensive unit testing. Perhaps a better approach would be to write some programs in your domain-specific language and then execute the opcodes to produce a result, which the test program would then check. This way you can exercise a bunch of code but check only one result at the end, instead of checking the generated opcodes each time. Start with simple programs to flush out obvious bugs, then move to harder ones.
Another approach is to automatically generate programs in your domain-specific language along with the expected opcodes. This can be as simple as writing a Perl script that produces a set of programs like:
set lcl_var = 2
set lcl_var = 3
Once you have a suite of test programs in your language with correct output, you can go back and generate unit tests that check each opcode. Since you already have the opcodes, it becomes a matter of inspecting the parser's output for correctness and reviewing its code.
While I've not used cppunit, I've used an in-house tool that was very much like it; implementing unit tests with it was easy.
What do you want to test? Do you want to know whether the correct "store" instruction is created? Whether the right variable is picked up? Make up your mind what you want to know and the test will be obvious. As long as you don't know what you want to achieve, you will not know how to test the unknown.
In the meantime, just write a simple test. Tomorrow or some later day, you will come back to this place because something broke. By then, you will know more about what you want to do, and it might be simpler to design a test.
Today, don't try to be the person you will be tomorrow.