Unit test design pattern for static functions - C++

I am writing a simple C++ class and trying to write a unit test for it.
The code is as simple as:
class Foo
{
public:
    static int EntryFunction(bool flag)
    {
        if (flag)
        {
            TryDownload();
        }
        else
        {
            TryDeleteFile();
        }
        return 0;
    }

private:
    static void TryDownload()
    {
        // http download code
    }

    static void TryDeleteFile()
    {
        // delete file code
    }
};
The issue is, according to the concept of unit testing, we cannot rely on the network connection, so the unit test cannot really run the download code. My ultimate goal is just to test the code paths: for example, when passing in true, the download path should be hit; otherwise the delete logic should be hit. I was thinking of overriding this class so the download and delete functions could be overridden to just set a flag and no-op, but the functions are static.
I was wondering what would be a good way to test this?

I think it depends on what is in your TryDownload and TryDeleteFile functions. If they use some other objects/functions to perform their tasks, you can configure simulations of those objects so that your TryDownload and TryDeleteFile don't know they aren't really downloading/deleting anything.
If you don't have such objects/functions (and everything is contained in TryDownload/TryDeleteFile), one might argue that the code is not well suited to unit testing because it can't be broken down into small units. In that case, your only option is an actual web service (perhaps running on localhost) that lets those functions do their thing.
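A rough sketch of that idea (the Transport interface, FakeTransport and the injected parameter below are invented for illustration, not part of the original code): pull the real network/file work behind an interface so a test only records which path was taken.
#include <cassert>

// Abstracts the work that would otherwise hit the network or the file system.
struct Transport
{
    virtual ~Transport() = default;
    virtual void Download() = 0;
    virtual void DeleteFile() = 0;
};

class Foo
{
public:
    // The collaborator is passed in, so the function can stay static.
    static int EntryFunction(bool flag, Transport& transport)
    {
        if (flag)
        {
            transport.Download();
        }
        else
        {
            transport.DeleteFile();
        }
        return 0;
    }
};

// Test double that only records which path was taken.
struct FakeTransport : Transport
{
    bool downloaded = false;
    bool deleted = false;
    void Download() override { downloaded = true; }
    void DeleteFile() override { deleted = true; }
};

int main()
{
    FakeTransport fake;
    Foo::EntryFunction(true, fake);
    assert(fake.downloaded && !fake.deleted);

    FakeTransport other;
    Foo::EntryFunction(false, other);
    assert(other.deleted && !other.downloaded);
    return 0;
}
The real TryDownload/TryDeleteFile bodies would then live in a concrete Transport implementation used only by production code, so the test never touches the network.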

One approach I can suggest is to use the Google Mock library within your unit test framework.
With Google Mock you can do exactly what you have described: verify which code path is taken without performing a real download or delete.
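For instance, building on the same kind of refactoring sketched above (a loose sketch, not the asker's actual code: it assumes the download/delete work is moved onto a virtual interface, since Google Mock can only mock virtual functions or template parameters; the Downloader/MockDownloader names are invented here):
#include <gmock/gmock.h>
#include <gtest/gtest.h>

struct Downloader
{
    virtual ~Downloader() = default;
    virtual void TryDownload() = 0;
    virtual void TryDeleteFile() = 0;
};

struct MockDownloader : Downloader
{
    MOCK_METHOD(void, TryDownload, (), (override));
    MOCK_METHOD(void, TryDeleteFile, (), (override));
};

// The entry point takes the collaborator as a parameter instead of
// calling its own static helpers directly.
int EntryFunction(bool flag, Downloader& d)
{
    if (flag)
    {
        d.TryDownload();
    }
    else
    {
        d.TryDeleteFile();
    }
    return 0;
}

TEST(EntryFunctionTest, TruePathDownloads)
{
    MockDownloader mock;
    EXPECT_CALL(mock, TryDownload()).Times(1);
    EXPECT_CALL(mock, TryDeleteFile()).Times(0);
    EntryFunction(true, mock);
}

TEST(EntryFunctionTest, FalsePathDeletes)
{
    MockDownloader mock;
    EXPECT_CALL(mock, TryDownload()).Times(0);
    EXPECT_CALL(mock, TryDeleteFile()).Times(1);
    EntryFunction(false, mock);
}
When linked against gtest_main, each test fails automatically if the wrong branch is exercised, because the mock verifies its expectations on destruction.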

Related

Mocking method from golang package

I have been unable to find a solution to mocking methods from golang packages.
For example, my project has code that attempts to recover when os.Getwd() returns an error. The easiest way I can think of to unit test this is to mock os.Getwd() to return an error and verify that the code behaves accordingly.
I tried using testify, but it does not seem to be possible.
Anyone have any experience with that?
My own solution was to take the method as an argument, which allows me to inject a "mock" when testing. Additionally, I create an exported method as a public facade and an unexported one for testing.
Example:
package main

import "os"

// Foo is the public facade; foo takes the dependency so a test can inject a fake.
func Foo() int {
	return foo(os.Getpid)
}

func foo(getpid func() int) int {
	return getpid()
}
It looks like the tests for os.Getwd itself could give you an example of how you could test your code. Look for the functions TestChdirAndGetwd and TestProgWideChdir.
From reading those, it seems that the tests create temporary folders.
So a pragmatic approach would be to create temporary folders, like the tests mentioned above do, then break them so that os.Getwd returns an error for you to catch in your test.
Just be careful doing these operations, as they can mess up your system. I'd suggest testing in a lightweight container or a virtual machine.
I know this is a bit late, but here is how you can do it.
Testing DAL code, system calls or other package calls is usually difficult. My approach to this problem is to push the system function calls behind an interface and then mock the functions of that interface. For example:
type SystemCalls interface {
	Getwd() error
}

type SystemCallsImplementation struct{}

func (SystemCallsImplementation) Getwd() error {
	_, err := os.Getwd()
	return err
}

func MyFunc(sysCall SystemCalls) error {
	return sysCall.Getwd()
}
With this you inject the interface that wraps the system calls into your function. Now you can easily create a mock implementation of that interface for testing, like:
type MockSystemCallsImplementation struct {
	err error
}

func (m MockSystemCallsImplementation) Getwd() error {
	return m.err // this can be set to nil or to some error value in your test function
}
Hope this answers your question.
This is a limitation of the Go compiler; the Google developers don't want to allow any hooks or monkey patching. If unit tests are important to you, then you have to select a method of "source code poisoning". All these methods come down to the following:
You can't use global packages directly.
You have to create an isolated version of the method and test it.
The production version of the method combines the isolated version with the global package.
But the best solution is to ignore the Go language completely (if possible).

How to unit test file manager class?

I wrote a simple class to manage business objects.
class Manager
{
    string[] GetNames();
    BObject GetObject(string name);
    void SaveObject(BObject obj);
}
It serializes/deserializes the objects as files on the local disk. I wrote unit tests for the class and ran them. That was fine so far. The problem happened when my tests were run on the build server: because of file access permissions, I was not allowed to write files on the server. Obviously I cannot test that way.
I'm thinking about how to unit test this. One approach I can see is to extract an interface and create a mock object for testing. But I want to test the class itself. How can I do it?
The class presumably calls file system operations File.OpenRead(), File.OpenWrite() etc. (I assume that this is C# due to the camel casing.) Then, you could create an interface for those operations, e.g.:
public interface IFileSystem {
StreamReader OpenRead(string fileName);
StreamWriter OpenWrite(string fileName);
}
and make the constructor of Manager take an instance of IFileSystem. Then, write a (non-mock) class that implements IFileSystem by calling the actual File.OpenRead() and File.OpenWrite() methods and use this one in the production code. In the tests, you use a mock framework, as mentioned by @Digger (my personal preference is Moq, but I haven't tried Rhino Mocks, so I have nothing negative to say about it), to mock out IFileSystem and use the mock to verify that the methods were called with the correct serialized data.
EDIT: Per request, an example in NUnit with Moq (I don't have an IDE here, so it's untested; feel free to correct it):
[Test]
public void BObjectShouldBeSerializedToFile() {
var fileSystemMock = new Mock<IFileSystem>();
var stream = new MemoryStream();
fileSystemMock.Setup(f => f.OpenWrite("theFileNameYouExpect.txt")).Returns(new StreamWriter(stream)).Verifiable();
var manager = new Manager(fileSystemMock.Object);
manager.SaveObject(new BObject(...));
stream.Seek(0, SeekOrigin.Begin);
Assert.That(...); // Perform asserts on the stream contents here
fileSystemMock.Verify(); // Not really necessary, but verify that `OpenWrite` was called
}
It depends on how much logic is contained in your class, in my opinion.
If there's some complicated logic inside your manager, it makes sense to abstract your file operations as per Aasmund's suggestion so that the logic can be tested independently of the file system. I do this when something is finicky enough to warrant the extra dependencies.
On the other hand, if there's very little logic other than calling into your serialization/deserialization code, then it's often acceptable to skip the unit tests and run integration tests that test the full cycle (create a BObject in memory, persist it via calling SaveObject, read it back out using GetObject, ensure that it is equal/equivalent to the one you persisted in the first place).
If your build environment can't run integration tests, then I'd look into setting it up so that it's possible.

Test framework for component testing

I am looking for a test framework that suits my requirements. The following are the steps that I need to perform during automated testing:
SetUp (there are some input files that need to be read or copied into specific folders)
Execute (run the standalone executable)
Tear Down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to make sure that if a .cc file changed, all the tests that can validate the change are run.
I am evaluating PyUnit and cppunit with SCons for this. I thought I'd ask this question to make sure I am headed in the right direction. Can you suggest any other test framework tools? And what other requirements should be considered in selecting the right test framework?
Try googletest, AKA gTest; it is no worse than any other unit test framework, and it can beat some in ease of use. It is not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample from the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if its body
  // is empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
SCons can take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to try to adjust and tweak some other framework to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++ or something like that, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.
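For reference, a CxxTest suite is just a header with a class deriving from CxxTest::TestSuite whose test methods use the TS_ASSERT macros (the file and class names below are made up for illustration):
// MyTestSuite.h
#include <cxxtest/TestSuite.h>

class MyTestSuite : public CxxTest::TestSuite
{
public:
    void testAddition()
    {
        TS_ASSERT_EQUALS(1 + 1, 2);
    }

    void testComparison()
    {
        TS_ASSERT(2 > 1);
    }
};
The cxxtestgen script (this is where the Python dependency comes in) then generates a runner from the header, typically something like cxxtestgen --error-printer -o runner.cpp MyTestSuite.h, which you compile and run; the exact invocation depends on your CxxTest version and build setup.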

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
db.Add(someData);
}
[Test]
public void GetTest()
{
db.Get(someData);
Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
db.Add(someData);
}
[Test]
public void GetTest()
{
// need some way here to ensure that db.Add() can actually be performed successfully
db.Add(someData);
db.Get(somedata);
Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, the test class design you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Add works), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
Create a global flag; set it when the Add test fails (in a catch-all block in Add) and return early from the Get test if it is set:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}

Unit testing code that basically does persistence - Should I bother?

I am using an API which interacts with a DB. This API has methods for querying, loading and saving elements to the DB. I have written integration tests which do things like create a new instance, then check that when I do a query for that instance, the correct instance is found. This is all fine.
I would like to have faster-running unit tests for this code, but am wondering about the usefulness of any unit tests here and whether they would actually give me anything. For example, let's say I have a class for saving some element I have via the API. This is pseudocode, but it gets across the idea of how the API I am using works.
public class ElementSaver
{
    private ITheApi m_api;

    // m_api is injected so a test could substitute a mock implementation.
    public ElementSaver(ITheApi api)
    {
        m_api = api;
    }

    public bool SaveElement(IElement newElement, IElement linkedElement)
    {
        IntPtr elemPtr = m_api.CreateNewElement();
        if (elemPtr == IntPtr.Zero)
        {
            return false;
        }
        if (m_api.SetElementAttribute(elemPtr, newElement.AttributeName, newElement.AttributeValue) == false)
        {
            return false;
        }
        if (m_api.SaveElement(elemPtr) == false)
        {
            return false;
        }
        IntPtr linkedElemPtr = m_api.GetElementById(linkedElement.Id);
        if (linkedElemPtr == IntPtr.Zero)
        {
            return false;
        }
        if (m_api.LinkElements(elemPtr, linkedElemPtr) == false)
        {
            return false;
        }
        return true;
    }
}
Is it worth writing unit tests which mock out the m_api member? It seems that I can test that false is returned if any of the various calls fail, and that true is returned if all of them succeed, and I could set expectations that the various methods are called with the expected parameters, but is this useful? If I were to refactor this code so that it used slightly different methods of the API but achieved the same result, this would break my tests and I would need to change them. This brittleness doesn't seem very useful.
Should I bother with unit tests for code like this, or should I just stick with the integration tests that I've got?
Look at what the tests are like. If you're only testing that the stuff that comes in ends up in the database etc., you're probably doing the right thing by only having automated integration tests. If there's logic you want to test, then you might want to look at whether you can factor that logic out into separate classes that you can unit test, with facades around the infrastructure code that contain no logic.
It is a good idea to mock out m_api in your code for the following reasons (not all of which apply to your pseudocode example):
As you mentioned, you can verify that your class performs error handling properly.
In cases where you have more complex code in your class (e.g., caching), you can use expectations on your mock to ensure that the class is behaving properly. For example, retrieve the same object twice but ensure that m_api is only called once.
Your unit test can test behavior without creating an appropriate data set. This increases maintainability over time as the data model underneath m_api changes.