Is there a way to specify an NUnit test as "extra credit"? - unit-testing

I have a few tests for an API, and I would like to be able to express certain tests that reflect "aspirational" or "extra credit" requirements - in other words, it's great if they pass, but fine if they don't. For instance:
[Test]
public void RequiredTest()
{
    // our client is using positive numbers in DoThing()
    int result = DoThing(1);
    Assert.That( /* result is correct */ );
}

[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);
    Assert.That( /* result is correct */ );
}
I know about the Ignore attribute, but I would like to be able to mark OptionalTest in such a way that it still runs on the CI server, but is fine if it does not pass - as soon as it does, I would like to take notice and perhaps make it a requirement. Is there any major unit test framework that supports this?

I would use Warnings to achieve this. That way, your test will print a 'warning' output, but it will not be a failure and will not fail your CI build.
See: https://github.com/nunit/docs/wiki/Warnings
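A minimal sketch of what that could look like, assuming NUnit 3.6 or later and a stand-in for the question's DoThing (the expected behaviour here is made up):

using NUnit.Framework;

[TestFixture]
public class AspirationalTests
{
    // Hypothetical system under test, standing in for the DoThing from the question.
    private static int DoThing(int input) => input;

    [Test]
    public void OptionalTest()
    {
        int result = DoThing(-1);

        // Warn.Unless records a warning instead of a failure when the condition is false,
        // so the test still runs on CI and shows up in the output, but does not break the build.
        Warn.Unless(result >= 0, "DoThing(-1) does not handle negative numbers yet");
    }
}

Assert.Warn(message) is the unconditional form, if you would rather branch on the result yourself.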
"as soon as it does, I would like to take notice and perhaps make it a requirement."
This part's a slightly separate requirement! Depends a lot on how you want to 'take notice'! Consider looking at Custom Attributes - it may be possible to write an IWrapSetUpTearDown attribute, which sends an email when the relevant test passes. See the docs, here: https://github.com/nunit/docs/wiki/ICommandWrapper-Interface
The latter is a more unusual requirement - I would expect to have to do something custom to fit your needs there!
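For what it's worth, here is a rough sketch of what such an attribute might look like. This is untested and based on my reading of the NUnit 3 custom-attribute docs, so treat the namespaces and base classes as assumptions to verify; the console write stands in for whatever notification you actually want to send:

using System;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;

[AttributeUsage(AttributeTargets.Method)]
public class NotifyWhenPassingAttribute : Attribute, IWrapSetUpTearDown
{
    public TestCommand Wrap(TestCommand command) => new NotifyWhenPassingCommand(command);

    private class NotifyWhenPassingCommand : DelegatingTestCommand
    {
        public NotifyWhenPassingCommand(TestCommand innerCommand) : base(innerCommand) { }

        public override TestResult Execute(TestExecutionContext context)
        {
            context.CurrentResult = innerCommand.Execute(context);

            // When the "extra credit" test starts passing, draw attention to it.
            // Replace the console write with an email or other notification of your choice.
            if (context.CurrentResult.ResultState == ResultState.Success)
                Console.WriteLine($"Optional test now passes: {context.CurrentTest.FullName}");

            return context.CurrentResult;
        }
    }
}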

C++: best way to realise global switches/flags to control program behaviour without tying the classes to a common point

Let me elaborate on the title:
I want to implement a system that would allow me to enable/disable/modify the general behavior of my program. Here are some examples:
I could switch off and on logging
I could change if my graphing program should use floating or pixel coordinates
I could change if my calculations should be based upon some method or some other method
I could enable/disable certain aspects, like maybe an extension API
I could enable/disable some basic integrated profiler (if I had one)
These are some made-up examples.
Now I want to know what the most common solution for this sort of thing is.
I could imagine this working with some sort of singleton class that gets instantiated globally, or in some other globally available object. Another possibility would be just constexpr or other variables floating around in a namespace, again globally.
However, doing something like that globally feels like bad practice.
Second part of the question
This might sound like I can't decide what I want, but I want a way to modify all these switches/flags (or whatever they are actually called) in a single location, without tying any of my classes to it. I don't know if this is possible, however.
Why don't I want to do that? Well, I like to make my classes somewhat reusable, and I don't like tying classes together unless it's required by the DRY principle and/or inheritance. I basically couldn't get rid of the flags without modifying the possibly hundreds of classes that used them.
What I have tried in the past
Having it all as compiler defines. This worked reasonably well; however, I didn't like that I couldn't make it so that, if the flag file was gone, there were some sort of default settings that would keep the classes themselves operational and changeable (through those default values).
Having it as a class and instancing it globally (a system class). This worked OK; however, I didn't like instancing anything globally. Also, same problem as above.
Instancing the system class locally and passing it to the classes on construction. This was kind of cool, since I could make multiple sets of settings. However, that somewhat defeated the point, since things that needed to have a flag set the same way could end up having it set differently, and therefore fail to work together properly. Also, passing it on every construction was a pain.
A static class. This one worked OK for the longest time; however, there is still the same problem as above when the flag definitions are missing.
Summary
Basically I am looking for a way to have a single "place" where I can mess with some values (bools, floats, etc.) that will change the behaviour of all classes using them, where said values either overwrite default values or get replaced by default values if said "place" isn't defined.
If a singleton class does not work for you, maybe a DI container would fit your third approach? It may help with the construction and make the code more testable.
There are some DI frameworks for c++, like https://github.com/google/fruit/wiki or https://github.com/boost-experimental/di which you can use.
If you decide to use switches/flags, pay attention to "cyclomatic complexity".
If you do not change the skeleton of your algorithm but only its behaviour according to the objects passed as parameters, have a look at the "template method" design pattern. It lets you define a generic algorithm and specialise particular steps for particular situations.
Here's an approach I found useful; I don't know if it's what you're looking for, but maybe it will give you some ideas.
First, I created a BehaviorFlags.h file that declares the following function:
// Returns true iff the given feature/behavior flag was specified for us to use
bool IsBehaviorFlagEnabled(const char * flagName);
The idea being that any code in any of your classes could call this function to find out if a particular behavior should be enabled or not. For example, you might put this code at the top of your ExtensionsAPI.cpp file:
#include "BehaviorFlags.h"
static const enableExtensionAPI = IsBehaviorFlagEnabled("enable_extensions_api");
[...]
void DoTheExtensionsAPIStuff()
{
if (enableExtensionsAPI == false) return;
[... otherwise do the extensions API stuff ...]
}
Note that the IsBehaviorFlagEnabled() call is only executed once at program startup, for best run-time efficiency; but you also have the option of calling IsBehaviorFlagEnabled() on every call to DoTheExtensionsAPIStuff(), if run-time efficiency is less important than being able to change your program's behavior without having to restart it.
As far as how the IsBehaviorFlagEnabled() function itself is implemented, it looks something like this (simplified version for demonstration purposes):
#include <cstdio>
#include <string>

bool IsBehaviorFlagEnabled(const char * flagName)
{
    // Note: a real implementation would find the user's home directory
    // using the proper API and not just rely on ~ to expand to the home-dir path
    std::string filePath = "~/MyProgram_Settings/";
    filePath += flagName;

    FILE * fpIn = fopen(filePath.c_str(), "r");  // i.e. does the file exist?
    const bool ret = (fpIn != NULL);
    if (fpIn) fclose(fpIn);  // only close the file if we actually opened it
    return ret;
}
The idea being that if you want to change your program's behavior, you can do so by creating a file (or folder) in the ~/MyProgram_Settings directory with the appropriate name. E.g. if you want to enable your Extensions API, you could just do a
touch ~/MyProgram_Settings/enable_extensions_api
... and then re-start your program, and now IsBehaviorFlagEnabled("enable_extensions_api") returns true and so your Extensions API is enabled.
The benefits I see of doing it this way (as opposed to parsing a .ini file at startup or something like that) are:
There's no need to modify any "central header file" or "registry file" every time you add a new behavior-flag.
You don't have to put a ParseINIFile() function at the top of main() in order for your flags-functionality to work correctly.
You don't have to use a text editor or memorize a .ini syntax to change the program's behavior
In a pinch (e.g. no shell access) you can create/remove settings simply using the "New Folder" and "Delete" functionality of the desktop's window manager.
The settings are persistent across runs of the program (i.e. no need to specify the same command line arguments every time)
The settings are persistent across reboots of the computer
The flags can be easily modified by a script (via e.g. touch ~/MyProgram_Settings/blah or rm -f ~/MyProgram_Settings/blah) -- much easier than getting a shell script to correctly modify a .ini file
If you have code in multiple different .cpp files that needs to be controlled by the same flag-file, you can just call IsBehaviorFlagEnabled("that_file") from each of them; no need to have every call site refer to the same global boolean variable if you don't want them to.
Extra credit: If you're using a bug-tracker and therefore have bug/feature ticket numbers assigned to various issues, you can creep the elegance a little bit further by also adding a class like this one:
/** This class encapsulates a feature that can be selectively disabled/enabled by putting an
 * "enable_behavior_xxxx" or "disable_behavior_xxxx" file into the ~/MyProgram_Settings folder.
 */
class ConditionalBehavior
{
public:
    /** Constructor.
     * @param bugNumber Bug-tracker ID number associated with this bug/feature.
     * @param defaultState If true, this behavior will be enabled by default (i.e. if no corresponding
     *                     file exists in ~/MyProgram_Settings). If false, it will be disabled by default.
     * @param switchAtVersion If specified, this feature's default-enabled state will be inverted if
     *                        GetMyProgramVersion() returns any version number greater than this.
     */
    ConditionalBehavior(int bugNumber, bool defaultState, int switchAtVersion = -1)
        : _enabled(defaultState)
    {
        if ((switchAtVersion >= 0)&&(GetMyProgramVersion() >= switchAtVersion)) _enabled = !_enabled;

        std::string fn = defaultState ? "disable" : "enable";
        fn += "_behavior_";
        fn += std::to_string(bugNumber);
        if ((IsBehaviorFlagEnabled(fn.c_str()))
          ||(IsBehaviorFlagEnabled("enable_everything")))
        {
            _enabled = !_enabled;
            printf("Note: %s Behavior #%i\n", _enabled?"Enabling":"Disabling", bugNumber);
        }
    }

    /** Returns true iff this feature should be enabled. */
    bool IsEnabled() const {return _enabled;}

private:
    bool _enabled;
};
Then, in your ExtensionsAPI.cpp file, you might have something like this:
// Extensions API feature is tracker #4321; disabled by default for now,
// but you can try it out via "touch ~/MyProgram_Settings/enable_behavior_4321"
static const ConditionalBehavior _feature4321(4321, false);

// Also, tracker #4222 is now enabled-by-default, but you can disable
// it manually via "touch ~/MyProgram_Settings/disable_behavior_4222"
static const ConditionalBehavior _feature4222(4222, true);

[...]

void DoTheExtensionsAPIStuff()
{
    if (_feature4321.IsEnabled() == false) return;
    [... otherwise do the extensions API stuff ...]
}
... or if you know that you are planning to make your Extensions API enabled-by-default starting with version 4500 of your program, you can set it so that Extensions API will be enabled-by-default only if GetMyProgramVersion() returns 4500 or greater:
static ConditionalBehavior _feature4321(4321, false, 4500);
[...]
... also, if you wanted to get more elaborate, the API could be extended so that IsBehaviorFlagEnabled() can optionally return a string to the caller containing the contents of the file it found (if any), so that you could do shell commands like:
echo "opengl" > ~/MyProgram_Settings/graphics_renderer
... to tell your program to use OpenGL for its 3D graphics, or etc:
// In Renderer.cpp
std::string rendererType;
if (IsBehaviorFlagEnabled("graphics_renderer", &rendererType))
{
    printf("The user wants me to use [%s] for rendering 3D graphics!\n", rendererType.c_str());
}
else printf("The user didn't specify what renderer to use.\n");

Should I write a separate method for any possible parameter value

I am very new to the unit testing concept and I am stuck writing my first test.
I have a method that normalizes an ID value. It should return the passed value for any positive number (even if it is a string with a number inside) and zero (0) for any other passed value.
function normalizeId($val) {
// if $val is good positive number return $val;
// else return 0;
}
I want to write a unit test for this function with assertions for all possible kinds of argument. For example:
5, -5, 0, "5", "-5", 3.14, "fff", new StdClass() etc.
Should I write a separate method in my TestCase class for each of these conditions, or have one method with all conditions on separate lines?
I.e.
public function testNormalizeId() {
    $this->assertEquals(5, MyClass::normalizeId(5));
    $this->assertEquals(0, MyClass::normalizeId(-5));
    $this->assertEquals(0, MyClass::normalizeId("fff"));
}
or
public function testNormalizeId_IfPositiveInt_GetPositiveInt() {
    $this->assertEquals(5, MyClass::normalizeId(5));
}

public function testNormalizeId_IfNegativeInt_GetZeroInt() {
    $this->assertEquals(0, MyClass::normalizeId(-5));
}

public function testNormalizeId_IfNotIntAsString_GetZeroInt() {
    $this->assertEquals(0, MyClass::normalizeId("fff"));
}
What about best practices? I hear that the second choice is good, but I'm worried about ending up with very many methods for the very many possible parameter values. A value can be a positive number, a negative number, zero, a string with a positive number inside, a string with a negative number inside, a string with a float inside, etc.
Edit
Or maybe the third approach, with a data provider?
public function testNormalizeIdProvider()
{
    return array(
        array(5, 5),
        array(-5, 0),
        array(0, 0),
        array(3.14, 0),
        array(-3.14, 0),
        array("5", 5),
        array("-5", 0),
        array("0", 0),
        array("3.14", 0),
        array("-3.14", 0),
        array("fff", 0),
        array("-fff", 0),
        array(true, 0),
        array(array(), 0),
        array(new stdClass(), 0),
    );
}

/**
 * @dataProvider testNormalizeIdProvider
 */
public function testNormalizeId($provided, $expected)
{
    $this->assertEquals($expected, MyObject::normalizeId($provided));
}
I'm not very knowledgeable about PHP or the unit testing frameworks you can use with it, but in the general sphere of unit testing I'd recommend the second approach, for these reasons:
It gives a specific test-case failure for a particular type of input, rather than making you trawl through the actual assert failure message to figure out which case failed.
It makes it much easier to parametrize these tests if you decide you need to test a specific type of conversion with more than one input (e.g. if you decided to have a text file containing 1,000 random strings and wanted to load these in a test driver and run the string-conversion test case for each entry, by way of functional or acceptance testing later on).
It makes it easier to change out individual test cases when you need some special setup logic.
It makes it easier to spot when you've missed a type of conversion, because the method names read off more easily against a checklist :)
(Dubious) It may make it easier to spot where your "god class" might be in need of internal refactoring into separate sub-classes that perform specific types of conversion (not saying your approach is wrong, but you might find the logic for one type of conversion very nasty; reviewing your 20 or 30 individual test cases could provide the impetus to bite the bullet and develop more specialized converter classes).
Hope that helps.
Use the data provider, as you discovered yourself. There is no benefit in duplicating the exact same test case across multiple methods when only the parameters and expectations change.
Personally, I really do start with the tests all in one method for such simple cases. I'd start with a simple good case and then gradually add more cases. I may not feel the need to change this into a data provider instantly, because it won't pay off instantly - but on the other hand things can change, and this test structure can be a short-term solution that needs refactoring.
So whenever you observe yourself adding more lines to such a multi-case test method, stop and switch it to a data provider instead.

Unit testing - log and then fail?

I am used to test-driving my code. Now that I am new to Go, I am trying to get it right as fast as possible. I am using the testing package in the standard library, which seems to be good enough. (I also like that it is not yet another external dependency. We are currently at 2 dependencies overall - compare that to any Java or Ruby project.....) Anyway - it looks like an assert in Go looks like this:
func TestSomething(t *testing.T) {
    something := false
    if !something {
        t.Log("Oh noes - something is false")
        t.Fail()
    }
}
I find this verbose and would like to do it on one line instead:
Assert( something, "Oh noes - something is false" )
or something like that. I hope that I have missed something obvious here. What is the best/idiomatic way to do it in Go?
UPDATE: just to clarify. If I were to do something like this:
func AssertTrue(t *testing.T, value bool, message string) {
    if !value {
        t.Log(message)
        t.Fail()
    }
}
and then write my test like this
func TestSomething(t *testing.T) {
    something := false
    AssertTrue(t, something, "Oh noes - something is false")
}
then would that not be the Go way to do it?
There are external packages that can be integrated with the stock testing framework.
One of them, gocheck, which I wrote long ago, was intended to address that kind of use case.
With it, the test case looks like this, for example:
func (s *Suite) TestFoo(c *gocheck.C) {
    // If this succeeds the world is doomed.
    c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
}
You'd run that as usual, with go test, and the failure in that check would be reported as:
----------------------------------------------------------------------
FAIL: foo_test.go:34: Suite.TestFoo
all_test.go:34:
// If this succeeds the world is doomed.
c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
... obtained string = "" +
... "line 1\n" +
... "line 2"
... expected string = "line 3"
Note how the comment right above the code was included in the reported failure.
There are also a number of other usual features, such as suite and test-specific set up and tear down routines, and so on. Please check out the web page for more details.
It's well maintained as I and other people use it in a number of active projects, so feel free to join on board, or follow up and check out the other similar projects that suit your taste more appropriately.
For examples of gocheck use, please have a look at packages such as mgo, goyaml, goamz, pipe, vclock, juju (massive code base), lpad, gozk, goetveld, tomb, etc. gocheck also manages to test itself. It was quite fun to bootstrap that.
But when you try to write tests the way Uncle Bob (Robert C. Martin) recommends, with one assert per test and long descriptive function names, a simple assert library like http://github.com/stretchr/testify/assert can make it much faster and easier.
I discourage writing tests in the way you seem to desire. It's not by chance that the whole stdlib uses what you call the "verbose" way.
It is undeniably more lines, but there are several advantages to this approach.
If you read Why does Go not have assertions? and apply s/error handling/test failure reporting/g, you can get a picture of why the several "assert" packages for Go testing are not a good idea to use.
Once again, the proof is the huge code base of the stdlib.
The idiomatic way is the way you have above. Also, you don't have to log any message if you don't want to.
As defined by the Go FAQ:
Why does Go not have assertions?
Go doesn't provide assertions. They are undeniably convenient, but our
experience has been that programmers use them as a crutch to avoid
thinking about proper error handling and reporting. Proper error
handling means that servers continue operation after non-fatal errors
instead of crashing. Proper error reporting means that errors are
direct and to the point, saving the programmer from interpreting a
large crash trace. Precise errors are particularly important when the
programmer seeing the errors is not familiar with the code.
We understand that this is a point of contention. There are many
things in the Go language and libraries that differ from modern
practices, simply because we feel it's sometimes worth trying a
different approach.
UPDATE
Based on your update, that is not idiomatic Go. What you are doing is, in essence, designing a test extension framework to mirror what you get in the xUnit frameworks. While there is nothing fundamentally wrong with that, from an engineering perspective it does raise questions about the value and cost of maintaining this extension library. Additionally, you are creating an in-house standard that will potentially ruffle feathers. The biggest thing about Go is that it is not C or Java or C++ or Python, and things should be done the way the language is constructed.

How to test asynchronous code

I've written my own access layer to a game engine. There is a GameLoop which gets called every frame which lets me process my own code. I'm able to do specific things and to check if these things happened. In a very basic way it could look like this:
void cycle()
{
    //set a specific value
    Engine::setText("Hello World");

    //read the value
    std::string text = Engine::getText();
}
I want to test if my Engine-layer is working by writing automated tests. I have some experience in using the Boost Unittest Framework for simple comparison tests like this.
The problem is that some things I want the engine to do are only processed after the call to cycle(). So calling Engine::getText() directly after Engine::setText(...) would return an empty string. If I waited until the next call of cycle(), the right value would be returned.
I now am wondering how I should write my tests if it is not possible to process them in the same cycle. Are there any best practices? Is it possible to use the "traditional testing" approach given by Boost Unittest Framework in such an environment? Are there perhaps other frameworks aimed at such a specialised case?
I'm using C++ for everything here, but I could imagine that there are answers unrelated to the programming language.
UPDATE:
It is not possible to access the Engine outside of cycle()
In your example above, std::string text = Engine::getText(); is the code you want to remember from one cycle but execute in the next. You can save it for later execution. For example - using C++11 you could use a lambda to wrap the test into a simple function specified inline.
There are two options available to you:
If the library you have can be used synchronously, or offers a C++11-futures-like facility (which can indicate the readiness of the result), then in your test case you can do something like this:
#include <cassert>

void testcycle()
{
    //set a specific value
    Engine::setText("Hello World");

    //wait until the engine reports that the result is ready
    while (!Engine::isResultReady());

    //read the value
    assert(Engine::getText() == "WHATEVERVALUEYOUEXPECT");
}
If you don't have the above, the best you can do is poll with a timeout (this is not a good option, though, because you may get spurious failures):
#include <cassert>
#include <chrono>
#include <thread>

void testcycle()
{
    using namespace std::chrono;

    //set a specific value
    Engine::setText("Hello World");

    const auto deadline = steady_clock::now() + seconds(1); // whatever max time you can tolerate
    while (Engine::getText() != "WHATEVERVALUEYOUEXPECT") {
        if (steady_clock::now() > deadline)
            assert(0); // timed out waiting for the engine to pick up the change
        std::this_thread::sleep_for(milliseconds(1));
    }
}

Unit Tests for comparing text files in NUnit

I have a class that processes 2 XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?
class ReportGenerator
{
    public string Generate(string inputPathA, string inputPathB)
    {
        //do stuff
    }
}
[TestFixture]
public class ReportGeneratorTests
{
    private static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();

                //this works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    private static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    //etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.txt")]
[Row("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
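For what it's worth, the RowTest extension dates from older NUnit versions; if you are on NUnit 2.5 or later, the built-in TestCase attribute does the same job without the extra extension (reusing the question's file names):

[TestCase("x1.xml", "x2.xml", "x-expected.txt")]
[TestCase("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}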
You are probably asking about testing against "gold" data. I don't know if there is a specific, widely accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has a "void DoTest(string fileName)" method, which reads the specified file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content is different, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
Then you will have specific test fixtures, where you implement the Transform method that does the right job, and then have tests that look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating about tests with colleagues, and so on. It is not a big deal to create a snippet that will create a test for you in a second; you will spend much more time preparing data.
You will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold one, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
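A minimal sketch of such a base fixture, assuming NUnit and making up the file-handling conventions (gold files living next to the inputs with a .gold extension, failing output dumped to a .actual file):

using System;
using System.IO;
using NUnit.Framework;

public abstract class GoldenFileFixture
{
    // Each concrete fixture supplies the transformation under test.
    protected abstract string Transform(string text);

    // Reads <fileName>, transforms it, and compares the result line by line
    // against <fileName>.gold, reporting the first difference.
    protected void DoTest(string fileName)
    {
        string actual = Transform(File.ReadAllText(fileName));
        string goldPath = fileName + ".gold";
        string actualPath = fileName + ".actual";

        if (!File.Exists(goldPath))
        {
            File.WriteAllText(actualPath, actual);
            Assert.Fail($"No gold file at {goldPath}; actual output written to {actualPath} for review");
        }

        string[] expectedLines = File.ReadAllLines(goldPath);
        string[] actualLines = actual.Replace("\r\n", "\n").Split('\n');

        for (int i = 0; i < Math.Max(expectedLines.Length, actualLines.Length); i++)
        {
            string expected = i < expectedLines.Length ? expectedLines[i] : "<missing line>";
            string got = i < actualLines.Length ? actualLines[i] : "<missing line>";
            if (expected != got)
            {
                File.WriteAllText(actualPath, actual);
                Assert.Fail($"Difference at line {i + 1}\nExpected: {expected}\nActual:   {got}");
            }
        }

        // The run matched the gold data, so clear any stale dump from a previous failure.
        File.Delete(actualPath);
    }
}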
I would probably write a single unit test that contains a loop. Inside the loop, I'd read 2 XML files and a diff file, then diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
Rather than calling Assert.AreEqual, you could parse the two input streams yourself, keep a count of line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note, as a rule, I'd generally only generate through my code one of the two streams you have. The other I'd grab from a test/text file, having verified by eye or other method that the data contained is correct!
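Here's a rough sketch of that line-and-column idea as an NUnit helper; the class and method names are mine, not from any library:

using System;
using NUnit.Framework;

public static class TextDiff
{
    // Walks both strings together, tracking line and column, and fails with a
    // pointed message at the first character that differs.
    public static void AssertSameText(string expected, string actual)
    {
        int line = 1, column = 1;
        int shared = Math.Min(expected.Length, actual.Length);

        for (int i = 0; i < shared; i++)
        {
            if (expected[i] != actual[i])
                Assert.Fail($"Line {line} Column {column} - found '{actual[i]}' when '{expected[i]}' was expected");

            if (expected[i] == '\n') { line++; column = 1; }
            else { column++; }
        }

        if (expected.Length != actual.Length)
            Assert.Fail($"Line {line} Column {column} - texts match up to here, but lengths differ ({actual.Length} vs {expected.Length})");
    }
}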
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: But in reality it was always enough for me to just do a simple read of the whole file to a string and compare the two strings. For the reporting it is enough to see that the test failed. Then when I do the debugging I usually diff the files using Araxis Merge to see where exactly I have issues.