Executing some code only when called from unit tests

I am developing a Windows Phone 7 application. It has a separate project for unit tests. In the main application I have a method that executes some code to navigate to another page. While unit testing I don't want that method to execute; it should just return on the first line. Right now I am doing this by defining a symbol for the unit-test build:
#define EnableUnitTest

public static void Navigate(string page)
{
#if EnableUnitTest
    return;
#endif
    .....
    .....
}
Is there a better way of doing this?

Sorry to say it, but that seems like a very bad design. In a unit test you are testing code, and that code shouldn't be conditioned this way. A unit test should consist of three steps, Arrange, Act, Assert: you prepare the input, call the actual method, and assert on the results. The Arrange step is where you prepare the method under test.
I also notice that this method is static. Static methods are difficult to unit test. It would be better to abstract the logic of this class behind an interface or abstract base class that can easily be mocked or stubbed in a unit test, which loosens the coupling between the different parts of your code.
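A minimal sketch of what that abstraction could look like in C#; the names here (INavigationService, PhoneNavigationService, FakeNavigationService) are made up for illustration and are not part of the question's code:

public interface INavigationService
{
    void Navigate(string page);
}

// Production implementation: wraps whatever the static Navigate method does today.
public class PhoneNavigationService : INavigationService
{
    public void Navigate(string page)
    {
        // ... the real page-navigation code from the original method ...
    }
}

// Test double: records the request instead of navigating, so unit tests never
// trigger real page navigation.
public class FakeNavigationService : INavigationService
{
    public string LastPage { get; private set; }

    public void Navigate(string page)
    {
        LastPage = page;
    }
}

The class under test then receives an INavigationService (for example through its constructor); the application wires in PhoneNavigationService, while tests pass FakeNavigationService and can even assert on LastPage.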

Related

How to assert which coroutine dispatcher is used in unit test?

I have a class to unit test that has two dispatchers injected, one for Dispatchers.Main and one for Dispatchers.IO. My code switches between the two via withContext() at some point and then switches back via .collect(). I would like my unit test for this code to assert which dispatcher I am supposed to be on. My test code currently just uses two instances of TestCoroutineDispatcher.
Does anyone know how I can do this?
One workable thing I have tried is modifying my production code to use launch(mainDispatcher + CoroutineName("someUniqueName")) { ... }, so that my unit test can assert by looking for that substring in Thread.currentThread().name. But it does not feel ideal, and it seems rather hacky to modify my production code just for that.

iOS Unit Testing: XCTestSuite, XCTest, XCTestRun, XCTestCase methods

In my daily unit test coding with Xcode, I only use XCTestCase. There are also other classes that don't seem to get used much, such as XCTestSuite, XCTest, and XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, the XCTestCase header has a few methods, such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when should the above be used?
I am having trouble finding documentation on these XCTest classes and methods.
Well, this question is pretty good, and I was just wondering why it was being ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for
most developers creating tests for their projects. A test case subclass can have
multiple test methods and supports setup and tear down that executes for every test
method as well as class level setup and tear down.
On the other hand, this is how XCTestSuite is defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment - all methods with no parameters, returning no value, and prefixed with ‘test’ in all subclasses of XCTestCase.
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit
test assertions are classified as "expected", while failures from unrelated or
uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, and failureCount while a test is running, or check things like hasSucceeded when it is done, and thereby get the result of running a test. XCTestRun gives you the ability to observe what is happening, or has happened, during a test.
Back to XCTestCase: if you read the source code, you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector:, and I recommend reading it for more digging.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included
in the test target. It iterates over each class in that list, and creates
a new instance of that class for each test method. It then creates an
"invocation" to execute that test method. The invocation is an instance of
NSInvocation, which represents a single message send in Objective-C.
The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use Cmd+Shift+O to open the Open Quickly dialog; type 'XCTest' and you will find some related files, such as XCTest.h, XCTestCase.h ... You need to go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/

How to choose TDD starting point in a real world project?

I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling to use it in a real-world project. My main issue is that I don't know where to start, i.e. which test should be the first one.
Suppose I have to write a client library that calls an external system's methods (e.g. notifications).
I want this client to work as follows:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up some rough classes set for this library?
Should I start by testing NotificationClient as below?
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with no baby steps as TDD prescribes. I could mock out something in the client, but then I have to know upfront what to mock, so some upfront design is needed.
Maybe I should start from the bottom, test the message formatting component first, and then use it in the client test?
Which way is the right one to go?
Should we always start from the top (and how do we deal with the huge step that requires)?
Can we start with any class that realizes a tiny part of the desired feature (like the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I've written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(string clientId) {
    }

    public string clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public string clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private string _clientId;

    public NotificationClient(string clientId) {
        _clientId = clientId;
    }

    public string clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD, you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are, you will delegate the implementation to multiple classes that you can unit test separately, as sketched below.
How complicated things get before you have to split them up to test and understand them properly is why it is called Test Driven Design.
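As a rough C# sketch of that delegation (IMessageFormatter is an invented name, and Event/Response are simplified stand-ins for the question's types), the client stays thin while the formatting rules get their own unit tests:

public enum Event { LIMIT_REACHED, WRONG_EVENT }

public class Response
{
    private readonly int code;
    public Response(int code) { this.code = code; }
    public int CodeValue() { return code; }
}

// Hypothetical collaborator: message preparation lives here and is tested on its own.
public interface IMessageFormatter
{
    string Format(Event eventType, object[] args);
}

public class NotificationClient
{
    private readonly string clientId;
    private readonly IMessageFormatter formatter;

    public NotificationClient(string clientId, IMessageFormatter formatter)
    {
        this.clientId = clientId;
        this.formatter = formatter;
    }

    public Response NotifyOnEvent(Event eventType, params object[] args)
    {
        // The client's only job: format the message, send it, wrap the reply.
        string message = formatter.Format(eventType, args);
        // ... send the message to the external system and translate its reply ...
        return new Response(0); // placeholder result
    }
}

The acceptance test from the question still exercises NotificationClient end to end, while the formatter's rules (the baby steps) are driven out by separate, smaller unit tests.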
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps which need to occur to provide that high level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification center interface you have started to define? Keep drilling down depth-first defining behavior with failing tests until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...

Test framework for component testing

I am looking for a test framework that suits my requirements. Following are the steps that I need to perform during automated testing:
Set up (some input files need to be read or copied into specific folders)
Execute (run the standalone application)
Tear down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence so that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this, and thought of asking this question to make sure I am heading in the right direction. Can you suggest any other test frameworks or tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, a.k.a. gTest. It is no worse than any other unit test framework, and can beat some of them in ease of use. It is not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if their bodies are empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons can take care of rebuilding your .cc files when they change, and gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to try to adjust and tweak another one to match your requirements.
One good option IMO, and something our test automation framework is moving towards, is using nosetests coupled with a library of common routines (start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python rather than C++, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first, before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(somedata);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
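A minimal sketch of that setup-step approach with NUnit; Database and Data here are hypothetical stand-ins for whatever db and someData are in the question:

[TestFixture]
public class GetOperationTests
{
    private Database db;    // hypothetical stand-in for the question's db
    private Data someData;  // hypothetical test data

    [SetUp]
    public void PutTestDataInPlace()
    {
        // Arrange: every test in this fixture gets its own freshly added data,
        // so GetTest never depends on AddTest having run first.
        db = new Database();
        someData = new Data();
        db.Add(someData);
    }

    [Test]
    public void GetTest()
    {
        var retrieved = db.Get(someData);
        Assert.That(retrieved, Is.Not.Null);
    }
}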
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, the test class design you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test
method fails then this test will not
run. Moreover, the dependency forces
this test to run after those it
depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
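A minimal sketch of that suggestion, using the classic NUnit 2.x [TestFixtureSetUp] attribute the answer mentions; the connectivity check is a placeholder you would replace with a real probe:

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // If the prerequisite isn't met, skip every test in this fixture.
        if (!DatabaseIsReachable())
        {
            Assert.Ignore("Skipping GetTests: no working database connection.");
        }
    }

    [Test]
    public void GetTest()
    {
        // ... the actual Get() test, which can now assume Add() works ...
    }

    private static bool DatabaseIsReachable()
    {
        // Placeholder: e.g. open and close a connection, or do a trivial
        // Add/Get round trip against a scratch record.
        return true;
    }
}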
Create a "global" flag that the Add test sets when it fails (in its catch block), and have the Get test return early if that flag is set:
public static boolean addFailed = false; // static ("global") so the value survives between tests

public void testAdd() throws Throwable {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    ... old test code ...
}